A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread presence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544
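The harmonic structure the abstract describes is simple to state formally: a harmonic sound contains energy at integer multiples of a fundamental frequency f0. Below is a minimal sketch of a harmonic complex tone, the kind of stimulus commonly used in such experiments; the function name and parameters are illustrative, not taken from the article.

```python
import math

def harmonic_complex(f0, n_harmonics, sample_rate=16000, duration=0.01):
    """Synthesize an equal-amplitude harmonic complex tone: sinusoidal
    partials at integer multiples of the fundamental frequency f0."""
    n_samples = int(sample_rate * duration)
    return [
        sum(math.sin(2 * math.pi * f0 * k * t / sample_rate)
            for k in range(1, n_harmonics + 1))
        for t in range(n_samples)
    ]

# The partials of a 200 Hz complex sit at 200, 400, 600, ... Hz.
partials = [200 * k for k in range(1, 6)]
```

Harmonically related frequencies in this sense are exactly those standing in small-integer ratios to a common fundamental, which is the relationship the cortical responses discussed above are selective for.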
Leong, Victoria; Goswami, Usha
Over 30 years ago, it was suggested that difficulties in the "auditory organization" of word forms in the mental lexicon might cause reading difficulties. It was proposed that children used parameters such as rhyme and alliteration to organize word forms in the mental lexicon by acoustic similarity, and that such organization was…
Sokol V Todi
BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal and lead to deafness by disrupting the organization of the antennal auditory organ (Johnston's Organ, JO). ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein) and the myosin binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.
Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard
The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. While the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have occasionally been described as forming an alternating or double row. We investigated the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than a linear sequence. This arrangement of neurons is shown to affect 30-50% of the overall auditory receptors. No strict correlation of somata positions between the antero-posterior and dorso-ventral axes was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly straight organization into a more complex morphology. The linear organization of dendrites is therefore not a morphological criterion that allows hearing organs to be distinguished from nonhearing sense organs serially homologous to ears in all species. The crowded arrangement of both receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints of auditory morphogenesis in postembryonic development. Copyright © 2012 Wiley Periodicals, Inc.
Song, Kun; Luo, Huan
Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, we designed an auditory memory-transfer study, combining a previously developed unsupervised white-noise memory paradigm with a reversed-sound manipulation method. Specifically, we systematically measured memory transfer from a random white-noise sound to its locally temporally reversed version at various temporal scales across seven experiments. We demonstrate a U-shaped memory-transfer pattern with a minimum around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transfer results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured as discrete temporal chunks in long-term auditory memory representation. PMID:28674512
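The "locally temporally reversed" manipulation described above can be sketched concretely: the signal is cut into consecutive chunks of a given temporal scale and each chunk is reversed in place, disrupting fine structure within chunks while preserving their global order. A minimal sketch under that reading (the function name and parameters are ours, not the authors'):

```python
def locally_reverse(samples, sample_rate, scale_s):
    """Reverse a signal within consecutive chunks of scale_s seconds,
    keeping the order of the chunks themselves intact."""
    chunk = max(1, int(sample_rate * scale_s))
    out = []
    for start in range(0, len(samples), chunk):
        out.extend(reversed(samples[start:start + chunk]))
    return out

# With a 2 ms scale at 1 kHz sampling, chunks are 2 samples long:
locally_reverse([1, 2, 3, 4, 5, 6], sample_rate=1000, scale_s=0.002)
```

When the scale reaches the full stimulus duration, the manipulation becomes global time reversal; the U-shaped transfer pattern implies that memory transfer is weakest at intermediate scales, around 200 ms.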
No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.
Koravand, Amineh; Jutras, Benoit
Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…
Background: The mammalian auditory cortex can be subdivided into various fields characterized by neurophysiological and neuroarchitectural properties and by connections with different nuclei of the thalamus. Besides the primary auditory cortex, echolocating bats have cortical fields for the processing of temporal and spectral features of the echolocation pulses. This paper reports on the location, neuroarchitecture and basic functional organization of the auditory cortex of the microchiropteran bat Phyllostomus discolor (family Phyllostomidae). Results: The auditory cortical area of P. discolor is located at parieto-temporal portions of the neocortex. It covers a rostro-caudal range of about 4800 μm and a medio-lateral distance of about 7000 μm on the flattened cortical surface. The auditory cortices of ten adult P. discolor were electrophysiologically mapped in detail. Responses of 849 units (single neurons and neuronal clusters of up to three neurons) to pure tone stimulation were recorded extracellularly. Cortical units were characterized and classified depending on their response properties such as best frequency, auditory threshold, first spike latency, response duration, width and shape of the frequency response area, and binaural interactions. Based on neurophysiological and neuroanatomical criteria, the auditory cortex of P. discolor could be subdivided into anterior and posterior ventral fields and anterior and posterior dorsal fields. The representation of response properties within the different auditory cortical fields was analyzed in detail. The two ventral fields were distinguished by their tonotopic organization with opposing frequency gradients. The dorsal cortical fields were not tonotopically organized but contained neurons that were responsive to high frequencies only. Conclusion: The auditory cortex of P. discolor resembles the auditory cortex of other phyllostomid bats in size and basic functional organization. The…
Lomas, Kathryn F; Greenwood, David R; Windmill, James F C; Jackson, Joseph C; Corfield, Jeremy; Parsons, Stuart
Weta possess typical Ensifera ears. Each ear comprises three functional parts: two equally sized tympanal membranes, an underlying system of modified tracheal chambers, and the auditory sensory organ, the crista acustica. This organ sits within an enclosed fluid-filled channel, previously presumed to be hemolymph. The role this channel plays in insect hearing is unknown. We discovered that the fluid within the channel is not actually hemolymph, but a medium composed principally of lipid from a new class. Three-dimensional imaging of this lipid channel revealed a previously undescribed tissue structure within the channel, which we refer to as the olivarius organ. Investigations into the function of the olivarius reveal de novo lipid synthesis, indicating that it produces these lipids in situ from acetate. The auditory role of this lipid channel was investigated using laser Doppler vibrometry of the tympanal membrane, which shows that the displacement of the membrane is significantly increased when the lipid is removed from the auditory system. Neural sensitivity of the system, however, decreased upon removal of the lipid, a surprising result considering that in a typical auditory system both the mechanical and auditory sensitivity are positively correlated. These two results, coupled with 3D modelling of the auditory system, lead us to hypothesize a model for weta audition that relies strongly on the presence of the lipid channel. This is the first instance of lipids being associated with an auditory system outside of the odontocete cetaceans, demonstrating convergence for the use of lipids in hearing.
Rössler, W; Kalmring, K
The bushcricket species Decticus albifrons, Decticus verrucivorus and Pholidoptera griseoaptera (Tettigoniidae) belong to the same subfamily (Decticinae) but differ significantly in body size. In spite of the great differences in the dimensions of the forelegs, where the auditory organs are located, the most sensitive range of the hearing threshold lies between 6 and 25 kHz in each case. Significant differences are present only in the frequency range from 2 to 5 kHz and above 25 kHz. The anatomy of the auditory receptor organs was compared quantitatively, using the techniques of semi-thin sectioning and computer-guided morphometry. The overall number of scolopidia and the length of the crista acustica differ among the three species, but the relative distribution of scolopidia along the crista acustica is very similar. Additionally, the scolopidia and their attachment structures (tectorial membrane, dorsal tracheal wall, cap cells) are of equal size at equivalent relative positions along the crista acustica. The results indicate that the constant relations and dimensions of corresponding structures within the cristae acusticae of the three species are responsible for the similarities in the tuning of the auditory thresholds.
Strauß, Johannes; Lakes-Harlan, Reinhard
Audition in insects is of polyphyletic origin. Tympanal ears derived from proprioceptive or vibratory receptor organs, but many questions of the evolution of insect auditory systems are still open. Despite the rather typical bauplan of the insect body, e.g., with a fixed number of segments, tympanal ears evolved at very different places, but only ensiferans have ears at the foreleg tibia, located in the tibial organ. The homology and monophyly of ensiferan ears is controversial, and no precursor organ was unambiguously identified for auditory receptors. The latter can only be identified by comparative study of recent atympanate taxa. These atympanate taxa are poorly investigated. In this paper, we report the neuroanatomy of the tibial organ of Comicus calcaris (Irish 1986), an atympanate Schizodactylid (splay-footed cricket). This representative of a Gondwana relict group has a tripartite sensory organ, homologous to tettigoniid ears. A comparison with morphology-based cladistic phylogeny indicates that the tripartite neuronal organization present in the majority of Tettigonioidea presumably preceded evolution of a hearing sense in the Tettigonioidea. Furthermore, the absence of a tripartite organ in Grylloidea argues against a monophyletic origin and homology of the cricket and katydid ears. The tracheal attachment of sensory neurons typical for ears of Tettigonioidea is present in C. calcaris and may have facilitated cooption for auditory function. The functional auditory organ was presumably formed in evolution by successive non-neural modifications of trachea and tympana. This first investigation of the neuroanatomy of Schizodactylidae suggests a non-auditory chordotonal organ as the precursor for auditory receptors of related tympanate taxa and adds evidence for the phylogenetic position of the group.
Malinowski, T.; Klepacki, J.; Wagstyl, R.
The evoked response audiometry (ERA) method of testing hearing loss is presented, and the results of comparative studies using subjective tonal audiometry and evoked response audiometry in tests of 56 healthy men with good hearing are discussed. The men were divided into three groups according to age and place of work: work place without increased noise; work place with noise and vibrations (at drilling machines); work place with noise and shocks (work at excavators in surface coal mines). The ERA-MKII audiometer produced by the Medelec-Amplaid firm was used. Audiometric threshold curves for the three groups of tested men are given. At frequencies of 500, 1000 and 4000 Hz the mean objective auditory threshold was shifted by 4-9.5 dB in comparison to the subjective auditory threshold. (21 refs.) (In Polish)
The peripheral and central tonotopy of auditory receptors of the bushcricket Pholidoptera griseoaptera is described. Out of 24 auditory receptor cells of the crista acustica 18 were identified by single-cell recordings in the prothoracic ganglion and complete staining with neurobiotin. Proximal receptor cells of the crista acustica were most sensitive to 6 kHz, with medial cells being sensitive to 20-30 kHz, whereas distal cells were most sensitive to frequencies higher than 50 kHz. Projection areas within the auditory neuropile in the prothoracic ganglion were tonotopically arranged. Proximal cells projected anteriorly, medial cells ventrally and posteriorly, and distal cells to more dorsal regions. Identified receptor cells revealed an interindividual variability of tuning and central projections. Receptor cells from the intermediate organ of a bushcricket were identified for the first time. Receptors of the distal intermediate organ were broadly tuned and less sensitive than those of the crista acustica. Receptor cells of the proximal intermediate organ were most sensitive to frequencies below 10 kHz. They projected in anterior portions of the auditory neuropile, whereas cells of the distal intermediate organ had terminations spread over almost the whole auditory neuropile.
Jeong, Jin Kwon; Tremere, Liisa A; Ryave, Michael J; Vuong, Victor C; Pinaud, Raphael
Recent studies on the anatomical and functional organization of GABAergic networks in central auditory circuits of the zebra finch have highlighted the strong impact of inhibitory mechanisms on both the central encoding and processing of acoustic information in a vocal learning species. Most of this work has focused on the caudomedial nidopallium (NCM), a forebrain area postulated to be the songbird analogue of the mammalian auditory association cortex. NCM houses neurons with selective responses to conspecific songs and is a site thought to house auditory memories required for vocal learning and, likely, individual identification. Here we review our recent work on the anatomical distribution of GABAergic cells in NCM, their engagement in response to song and the roles for inhibitory transmission in the physiology of NCM at rest and during the processing of natural communication signals. GABAergic cells are highly abundant in the songbird auditory forebrain and account for nearly half of the overall neuronal population in NCM with a large fraction of these neurons activated by song in freely-behaving animals. GABAergic synapses provide considerable local, tonic inhibition to NCM neurons at rest and, during sound processing, may contain the spread of excitation away from un-activated or quiescent parts of the network. Finally, we review our work showing that GABA(A)-mediated inhibition directly regulates the temporal organization of song-driven responses in awake songbirds, and appears to enhance the reliability of auditory encoding in NCM.
Fritzsch, Bernd; Pan, Ning; Jahan, Israt; Duncan, Jeremy S; Kopecky, Benjamin J; Elliott, Karen L; Kersigo, Jennifer; Yang, Tian
The tetrapod auditory system transmits sound through the outer and middle ear to the organ of Corti or other sound pressure receivers of the inner ear where specialized hair cells translate vibrations of the basilar membrane into electrical potential changes that are conducted by the spiral ganglion neurons to the auditory nuclei. In other systems, notably the vertebrate limb, a detailed connection between the evolutionary variations in adaptive morphology and the underlying alterations in the genetic basis of development has been partially elucidated. In this review, we attempt to correlate evolutionary and partially characterized molecular data into a cohesive perspective of the evolution of the mammalian organ of Corti out of the tetrapod basilar papilla. We propose a stepwise, molecularly partially characterized transformation of the ancestral, vestibular developmental program of the vertebrate ear. This review provides a framework to decipher both discrete steps in development and the evolution of unique functional adaptations of the auditory system. The combined analysis of evolution and development establishes a powerful cross-correlation where conclusions derived from either approach become more meaningful in a larger context which is not possible through exclusively evolution or development centered perspectives. Selection may explain the survival of the fittest auditory system, but only developmental genetics can explain the arrival of the fittest auditory system. [Modified after (Wagner 2011)]. © 2013 Wiley Periodicals, Inc.
Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J.
Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory information and semantic information into consideration, in this paper. Direct phonetic–semantic association is simulated in order to model the language acquisition in early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM has good ability to learn auditory and semantic categories presented within the training data; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to a detailed categorization as well as to a detailed clustering, while keeping the clusters that have already been learned and the network structure that has already been developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory–semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically-inspired neurocomputational model. PMID:24688478
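The I-GSOM model builds on the classic self-organizing map (SOM): each input pulls the best-matching unit and its map neighbors toward itself, so nearby units come to encode similar categories. A minimal one-dimensional SOM update step is sketched below; this is not the authors' I-GSOM (which adds node growth and cross-map association links), and all names and parameters are illustrative.

```python
import math

def som_step(weights, x, eta=0.5, sigma=1.0):
    """One SOM update on a 1-D map: find the best-matching unit (BMU)
    for input vector x, then move every unit toward x, scaled by a
    Gaussian neighborhood function centered on the BMU."""
    # BMU = unit with the smallest squared Euclidean distance to x.
    bmu = min(range(len(weights)),
              key=lambda i: sum((w - xi) ** 2
                                for w, xi in zip(weights[i], x)))
    for i in range(len(weights)):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood
        weights[i] = [w + eta * h * (xi - w)
                      for w, xi in zip(weights[i], x)]
    return bmu

weights = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
bmu = som_step(weights, [1.9, 2.1])  # unit 2 is closest, so it is the BMU
```

In the interconnected variant, two such maps (one auditory, one semantic) are trained jointly, and the reinforcing-by-link procedure described above strengthens associations between co-activated units across the two maps.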
Tang, Y. Z.; Christensen-Dalsgaard, J.; Carr, C. E.
We used tract tracing to reveal the connections of the auditory brainstem in the Tokay gecko (Gekko gecko). The auditory nerve has two divisions, a rostroventrally directed projection of mid- to high best-frequency fibers to the nucleus angularis (NA) and a more dorsal and caudal projection of lo...
Razak, Khaleel A; Fuzessery, Zoltan M
This report maps the organization of the primary auditory cortex of the pallid bat in terms of frequency tuning, selectivity for behaviorally relevant sounds, and interaural intensity difference (IID) sensitivity. The pallid bat is unusual in that it localizes terrestrial prey by passively listening to prey-generated noise transients (1-20 kHz), while reserving high-frequency (>30 kHz) hearing for echolocation. Most neurons (83%) tuned below 30 kHz responded selectively to noise transients, whereas most neurons (62%) tuned above 30 kHz responded selectively or exclusively to the 60- to 30-kHz downward frequency-modulated (FM) sweep used for echolocation. Within the low-frequency region, neurons were placed in two groups that occurred in two separate clusters: those selective for low- or high-frequency band-pass noise and suppressed by broadband noise, and neurons that showed no preference for band-pass noise over broadband noise. Neurons were organized in homogeneous clusters with respect to their binaural response properties. The distribution of binaural properties differed in the noise- and FM sweep-preferring regions, suggesting task-dependent differences in binaural processing. The low-frequency region was dominated by a large cluster of binaurally inhibited neurons with a smaller cluster of neurons with mixed binaural interactions. The FM sweep-selective region was dominated by neurons with mixed binaural interactions or monaural neurons. Finally, this report describes a cortical substrate for systematic representation of a spatial cue, IIDs, in the low-frequency region. This substrate may underlie a population code for sound localization based on a systematic shift in the distribution of activity across the cortex with sound source location.
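The closing idea of a population code for IID, in which sound location is read out from the distribution of activity across the cortex, can be illustrated with a toy model. The Gaussian tuning curves, the grid of preferred IIDs, and the activity-weighted readout below are all illustrative assumptions, not the recording data of this study.

```python
import math

def iid_response(best_iid, iid, width=5.0):
    """Gaussian tuning of one model neuron to interaural intensity
    difference (IID, in dB); the tuning width is an invented parameter."""
    return math.exp(-((iid - best_iid) ** 2) / (2 * width ** 2))

def decode_iid(best_iids, activities):
    """Population-vector readout: activity-weighted mean of preferred IIDs."""
    return sum(b * a for b, a in zip(best_iids, activities)) / sum(activities)

best_iids = [-20, -15, -10, -5, 0, 5, 10, 15, 20]   # preferred IIDs (dB)
stimulus_iid = 7.0
activities = [iid_response(b, stimulus_iid) for b in best_iids]
estimate = decode_iid(best_iids, activities)
```

As the stimulus IID shifts, the hill of activity shifts across the population and the decoded value tracks it, which is the essence of a distribution-of-activity code for source location.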
Tuck, E J; Windmill, J F C; Robert, D
Tympanal hearing organs are widely used by insects to detect sound pressure. Such ears are relatively uncommon in the order Diptera, having only been reported in two families thus far. This study describes the general anatomical organization and experimentally examines the mechanical resonant properties of an unusual membranous structure situated on the ventral prothorax of the tsetse fly, Glossina morsitans (Diptera: Glossinidae). Anatomically, the prosternal membrane is backed by an air filled chamber and attaches to a pair of sensory chordotonal organs. Mechanically, the membrane shows a broad resonance around 5.3-7.2 kHz. Unlike previously reported dipteran tympana, a directional response to sound was not found in G. morsitans. Collectively, the morphology, the resonant properties and acoustic sensitivity of the tsetse prothorax are consistent with those of the tympanal hearing organs in Ormia sp. and Emblemasoma sp. (Tachinidae and Sarcophagidae). The production of sound by several species of tsetse flies has been repeatedly documented. Yet, clear behavioural evidence for acoustic behaviour is sparse and inconclusive. Together with sound production, the presence of an ear-like structure raises the enticing possibility of auditory communication in tsetse flies and renews interest in the sensory biology of these medically important insects.
Sense organs filter relevant information from a broad background of physical interactions and discard possible perceptual input that has not proven useful during the course of biological evolution. Sense organs not only limit the access to physical reality; under certain conditions they have a life of their own and produce responses even in the absence of physical stimulation. As a perfect example, the inner ear, the cochlea, in addition to detecting incoming sound waves, is also capable of producing sound energy. Such "active" processes, however, seem to be necessary to push detection thresholds close to physical limits. The price that has to be paid is "cochlear artifacts" such as otoacoustic emissions. In the following, the measurement of sound emitted by the ear is introduced as a noninvasive means to assess cochlear function and to help unravel the mechanical interaction between sensory cells and supporting structures that ultimately leads to sensitive and sharply tuned auditory perception. One focus is on the cochlea of echolocating bats, which use audition as the main window of perception onto their environment and therefore place the highest demands on cochlear performance.
Evans, Samuel; Davis, Matthew H.
How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form; at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. PMID:26157026
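The core computation of representational similarity analysis, building a representational dissimilarity matrix (RDM) for each region and correlating it with a model RDM, can be sketched as follows. The toy voxel patterns, the 1 - Pearson dissimilarity measure, and the crude tie handling in the rank step are all simplifying assumptions; this is not the searchlight pipeline used in the study.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rdm(patterns):
    """Representational dissimilarity matrix, upper triangle only:
    1 - correlation between the response patterns of each condition pair."""
    n = len(patterns)
    return [1 - pearson(patterns[i], patterns[j])
            for i in range(n) for j in range(i + 1, n)]

def rank(v):
    """Crude ranks (ties broken by position; real RSA averages tied ranks)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for k, i in enumerate(order):
        r[i] = float(k)
    return r

def spearman(a, b):
    """Rank correlation between a neural RDM and a model RDM."""
    return pearson(rank(a), rank(b))

# Toy voxel patterns: two syllables ("ba", "da"), each from two speakers.
pats = [
    [1.0, 0.0, 1.0, 0.0],    # "ba", speaker 1
    [1.0, 0.0, 0.9, 0.1],    # "ba", speaker 2
    [0.0, 1.0, 0.0, 1.0],    # "da", speaker 1
    [0.1, 0.9, 0.05, 1.0],   # "da", speaker 2
]
neural_rdm = rdm(pats)
# Model RDM coding syllable identity; pair order (0,1),(0,2),(0,3),(1,2),(1,3),(2,3).
model_rdm = [0, 1, 1, 1, 1, 0]
rho = spearman(neural_rdm, model_rdm)
```

In this toy region the two same-syllable pairs are the most similar despite the speaker change, so the neural RDM correlates positively with the syllable-identity model, the signature a searchlight would pick up in an identity-encoding region.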
Fallon, James B; Irvine, Dexter R F; Shepherd, Robert K
Electrical stimulation of spiral ganglion neurons in a deafened cochlea, via a cochlear implant, provides a means of investigating the effects of the removal and subsequent restoration of afferent input on the functional organization of the primary auditory cortex (AI). We neonatally deafened 17 cats before the onset of hearing, thereby abolishing virtually all afferent input from the auditory periphery. In seven animals the auditory pathway was chronically reactivated with environmentally derived electrical stimuli presented via a multichannel intracochlear electrode array implanted at 8 weeks of age. Electrical stimulation was provided by a clinical cochlear implant that was used continuously for periods of up to 7 months. In 10 long-term deafened cats and three age-matched normal-hearing controls, an intracochlear electrode array was implanted immediately prior to cortical recording. We recorded from a total of 812 single unit and multiunit clusters in AI of all cats as adults using a combination of single tungsten and multichannel silicon electrode arrays. The absence of afferent activity in the long-term deafened animals had little effect on the basic response properties of AI neurons but resulted in complete loss of the normal cochleotopic organization of AI. This effect was almost completely reversed by chronic reactivation of the auditory pathway via the cochlear implant. We hypothesize that maintenance or reestablishment of a cochleotopically organized AI by activation of a restricted sector of the cochlea, as demonstrated in the present study, contributes to the remarkable clinical performance observed among human patients implanted at a young age.
Li, Chao-jun; Zhu, Pei-fang; Liu, Zhao-hua; Wang, Zheng-guo; Yang, Cheng; Chen, Hai-bin; Ning, Xin; Zhou, Ji-hong; Chen, Jian
To explore the protective effects of earplug and barrel on the auditory organs of guinea pigs exposed to experimental blast underpressure (BUP), the hearing thresholds of the guinea pigs were assessed with auditory brainstem responses (ABR). The traumatic levels of the tympanic membrane and ossicular chain were observed under a stereo-microscope. The rate of outer hair cell (OHC) loss was analyzed using a light microscope. The changes in guinea pigs protected with barrel and earplug were compared with those of the control group without any protection. A marked ABR threshold shift in the unprotected guinea pigs was detected from 8 h to 14 d after exposure to BUP with a peak ranging from -64.5 kPa to -69.3 kPa. The guinea pigs protected with barrel and earplug had lower ABR thresholds and total OHC loss rates than the animals without any protection. Earplug and barrel thus have protective effects against BUP-induced trauma to the auditory organs of guinea pigs, and the protective effects of barrel are better than those of earplug.
Zakon, Harold; Capranica, Robert R.
Binaural cells in the superior olive normally have identical frequency sensitivities when acoustically stimulated via either ear. The precision with which central connections are reformed after auditory nerve regeneration can be determined by comparing the frequency sensitivities of the two binaural inputs to these cells. Three months after cutting the nerve and subsequent regeneration in the leopard frog, binaural cells once again have well-matched frequency sensitivities. Thus, the specificity of central connectivity that characterizes the auditory system in normal animals is restored after regeneration.
Liisa A. Tremere
Sex steroid hormones influence the perceptual processing of sensory signals in vertebrates. In particular, decades of research have shown that circulating levels of estrogen correlate with hearing function. The mechanisms and sites of action supporting this sensory-neuroendocrine modulation, however, remain unknown. Here we combined a molecular cloning strategy, fluorescence in-situ hybridization and unbiased quantification methods to show that estrogen-producing and -sensitive neurons heavily populate the adult mouse primary auditory cortex (AI). We also show that auditory experience in freely-behaving animals engages estrogen-producing and -sensitive neurons in AI. These estrogen-associated networks are highly stable, and do not quantitatively change as a result of acute episodes of sensory experience. We further demonstrate the neurochemical identity of estrogen-producing and estrogen-sensitive neurons in AI and show that these cell populations are phenotypically distinct. Our findings provide the first direct demonstration that estrogen-associated circuits are highly prevalent and engaged by sensory experience in the mouse auditory cortex, and suggest that previous correlations between estrogen levels and hearing function may be related to brain-generated hormone production. Finally, our findings suggest that estrogenic modulation may be a central component of the operational framework of central auditory networks.
volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.
The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences
Faronii-Butler, Kishasha O.
This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…
Bangert, M; Kalmring, K; Sickmann, T; Stephen, R; Jatho, M; Lakes-Harlan, R
The auditory organs of tettigoniids are located just below the femoral-tibial joint in the forelegs. Structurally, each auditory organ consists of a tonotopically organized crista acustica and intermediate organ, together with associated sound-conducting structures: an acoustic trachea and two lateral tympanic membranes located at the level of the receptor complex. The receptor cells and associated satellite structures are located in a channel filled with hemolymph fluid. The vibratory response characteristics of the tympanic membranes generated by sound stimulation over the frequency range 2-40 kHz were studied using laser vibrometry. The acoustic trachea was found to be the principal structure through which sound energy reaches the tympana. The velocity of propagation down the trachea was observed to be independent of frequency and appreciably lower than the velocity of sound in free space. Structurally, the tympana are partially in contact with the air in the trachea and with the hemolymph in the channel containing the receptor cells. The two tympana were found to oscillate in phase with a broad-band frequency response, linear coherent response characteristics and a small time constant. Higher modes of vibration were not observed. Measurements of the pattern of vibration of the tympana showed that these structures vibrate as hinged flaps rather than as stretched membranes. These findings, together with the morphology of the organ and physiological data from the receptor cells, suggest the possibility of an impedance-matching function for the tympana in the transmission of acoustic energy to the receptor cells of the tettigoniid ear.
The connections of the inferior colliculus, the mammalian mid-brain auditory center, were determined in the greater horseshoe bat (Rhinolophus ferrumequinum), using the horseradish peroxidase method. In order to localize the auditory centers of this bat, brains were investigated with the aid of cell and fiber-stained material. The results show that most auditory centers are highly developed in this echolocating bat. However, the organization of the central auditory system does not generally differ from the mammalian scheme. This holds also for the organization of the superior olivary complex where a well-developed medial superior olivary nucleus was found. In addition to the ventral and dorsal nuclei of the lateral lemniscus a third well-developed nucleus has been defined which projects ipsilaterally to the inferior colliculus and which was called the intermediate nucleus of the lateral leminiscus. All nuclei of the central auditory pathway project ipsi-, contra-, or bilaterally to the central nucleus of the inferior colliculus with the exception of the medial nucleus of the trapezoid body and the medial geniculate body. The tonotopic organization of these projections and their possible functions are discussed in context with neurophysiological investigations.
Evolutionary diversification of the auditory organ sensilla in Neoconocephalus katydids (Orthoptera: Tettigoniidae) correlates with acoustic signal diversification over phylogenetic relatedness and life history.
Strauß, J; Alt, J A; Ekschmitt, K; Schul, J; Lakes-Harlan, R
Neoconocephalus katydids (Tettigoniidae) are a model for the evolution of acoustic signals, as male calls have diversified in temporal structure during the radiation of the genus. The call divergence and phylogeny in Neoconocephalus are established, but in tettigoniids in general, accompanying evolutionary changes in hearing organs have not been studied. We investigated anatomical changes of the tympanal hearing organs during the evolutionary radiation and divergence of intraspecific acoustic signals. We compared the neuroanatomy of auditory sensilla (crista acustica) from nine Neoconocephalus species for the number of auditory sensilla and the crista acustica length. These parameters were correlated with differences in temporal call features, body size, life histories and different phylogenetic positions. By this, adaptive responses to shifting frequencies of male calls and changes in their temporal patterns can be evaluated against phylogenetic constraints and allometry. All species showed well-developed auditory sensilla, on average 32-35 between species. Crista acustica length and sensillum numbers correlated with body size, but not with phylogenetic position or life history. Statistically significant correlations existed also with specific call patterns: a higher number of auditory sensilla occurred in species with continuous calls or slow pulse rates, and a longer crista acustica occurred in species with double pulses or slow pulse rates. The auditory sensilla show significant differences between species despite their recent radiation, and morphological and ecological similarities. This indicates responses to natural and sexual selection, including divergence of temporal and spectral signal properties. Phylogenetic constraints are unlikely to limit these changes of the auditory systems. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology.
Chin Michael T
Abstract Background During mouse development, the precursor cells that give rise to the auditory sensory organ, the organ of Corti, are specified prior to embryonic day 14.5 (E14.5). Subsequently, the sensory domain is patterned precisely into one row of inner and three rows of outer sensory hair cells interdigitated with supporting cells. Both the restriction of the sensory domain and the patterning of the sensory mosaic of the organ of Corti involve Notch-mediated lateral inhibition and cellular rearrangement characteristic of convergent extension. This study explores the expression and function of a putative Notch target gene. Results We report that a putative Notch target gene, the hairy-related basic helix-loop-helix (bHLH) transcription factor Hey2, is expressed in the cochlear epithelium prior to terminal differentiation. Its expression is subsequently restricted to supporting cells, overlapping with the expression domains of two known Notch target genes, the Hairy and enhancer of split homolog genes Hes1 and Hes5. In combination with the loss of Hes1 or Hes5, genetic inactivation of Hey2 leads to increased numbers of mis-patterned inner or outer hair cells, respectively. Surprisingly, the ectopic hair cells in Hey2 mutants are accompanied by ectopic supporting cells. Furthermore, Hey2-/-;Hes1-/- and Hey2-/-;Hes1+/- mutants show a complete penetrance of early embryonic lethality. Conclusion Our results indicate that Hey2 functions in parallel with Hes1 and Hes5 in patterning the organ of Corti, and interacts genetically with Hes1 for early embryonic development and survival. Our data implicate an expansion of the progenitor pool and/or of the boundaries of the developing sensory organ as the likely basis of the patterning defects observed in Hey2 mutants.
Michael H Graber
Motor functions are often guided by sensory experience, most convincingly illustrated by complex learned behaviors. Key to sensory guidance in motor areas may be the structural and functional organization of sensory inputs and their evoked responses. We study sensory responses in large populations of neurons and neuron-assistive cells in the songbird motor area HVC, an auditory-vocal brain area involved in sensory learning and in adult song production. HVC spike responses to auditory stimulation display a remarkable preference for the bird's own song (BOS) compared to other stimuli. Using two-photon calcium imaging in anesthetized zebra finches, we measure the spatio-temporal structure of baseline activity and of auditory evoked responses in identified populations of HVC cells. We find strong correlations between calcium signal fluctuations in nearby cells of a given type, both in identified neurons and in astroglia. In identified HVC neurons only, auditory stimulation decorrelates ongoing calcium signals, less so for BOS than for other sound stimuli. Overall, calcium transients show a strong preference for BOS in identified HVC neurons but not in astroglia, revealing diversity in local functional organization among identified neuron and astroglia populations.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Li, Tongchao; Giagtzoglou, Nikolaos; Eberl, Daniel F; Jaiswal, Sonal Nagarkar; Cai, Tiantian; Godt, Dorothea; Groves, Andrew K; Bellen, Hugo J
Myosins play essential roles in the development and function of auditory organs and multiple myosin genes are associated with hereditary forms of deafness. Using a forward genetic screen in Drosophila, we identified an E3 ligase, Ubr3, as an essential gene for auditory organ development. Ubr3 negatively regulates the mono-ubiquitination of non-muscle Myosin II, a protein associated with hearing loss in humans. The mono-ubiquitination of Myosin II promotes its physical interaction with Myosin VIIa, a protein responsible for Usher syndrome type IB. We show that ubr3 mutants phenocopy pathogenic variants of Myosin II and that Ubr3 interacts genetically and physically with three Usher syndrome proteins. The interactions between Myosin VIIa and Myosin IIa are conserved in the mammalian cochlea and in human retinal pigment epithelium cells. Our work reveals a novel mechanism that regulates protein complexes affected in two forms of syndromic deafness and suggests a molecular function for Myosin IIa in auditory organs.
Pickles, James O
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external, middle ears, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Rivolta, Marcelo N
The development of any stem-cell-based therapy (and a potential one for deafness is no exception) relies on the generation of the necessary tools: 'cell drugs' that can be safely manufactured for clinical application. An increasing body of work has focussed on the identification, in animal models, of potential stem cell sources that could have an application for regenerative therapy in the auditory organ. A still more circumscribed effort, owing to ethical and technical difficulties, aims to obtain the actual potential therapeutic candidates (i.e. stem cells of human origin). A recently isolated population of human fetal auditory stem cells could become an ideal model for some of the challenges lying ahead regarding cochlear stem cell purification, expansion and maintenance. 2010 Elsevier Ltd. All rights reserved.
Nishimura, Masataka; Takemoto, Makoto; Song, Wen-Jie
The prevailing model of the primate auditory cortex proposes a core-belt-parabelt structure. The model proposes three auditory areas in the lateral belt region; however, it may contain more, as this region has been mapped only at a limited spatial resolution. To explore this possibility, we examined the auditory areas in the lateral belt region of the marmoset using a high-resolution optical imaging technique. Based on responses to pure tones, we identified multiple areas in the superior temporal gyrus. The three areas in the core region, the primary area (A1), the rostral area (R), and the rostrotemporal area, were readily identified from their frequency gradients and positions immediately ventral to the lateral sulcus. Three belt areas were identified with frequency gradients and relative positions to A1 and R that were in agreement with previous studies: the caudolateral area, the middle lateral area, and the anterolateral area (AL). Situated between R and AL, however, we identified two additional areas. The first was located caudoventral to R with a frequency gradient in the ventrocaudal direction, which we named the medial anterolateral (MAL) area. The second was a small area with no obvious tonotopy (NT), positioned between the MAL and AL areas. Both the MAL and NT areas responded to a wide range of frequencies (at least 2-24 kHz). Our results suggest that the belt region caudoventral to R is more complex than previously proposed, and we thus call for a refinement of the current primate auditory cortex model.
Hertwig, I; Schneider, H
Hyphessobrycon simulans has a Weberian apparatus for transmission of sound energy to the auditory organ, whereas Poecilia reticulata does not. The fine structure of the auditory organs is identical in the two species. The better hearing typical of the Ostariophysi, expressed as a larger bandwidth and higher sensitivity, therefore seems to be based exclusively on the presence of the Weberian apparatus. The sensory epithelium of the saccule and the lagena is made up of hair (sensory) cells and supporting cells. The vertically orientated macula sacculi is divided into a dorsal and a ventral cell area with oppositely arranged hair-cell kinocilia. The sagitta occupies the center of the saccule and shows only three small sites with connections to the otolithic membrane. Remarkably, the dorsal sensory cells are connected to the ventral part of the otolith, whereas the ventral cells are connected to the dorsal part. The macula of the lagena also comprises a dorsal and a ventral cell area with oppositely arranged hair cells. The sensory cells in all maculae are of type II. They exhibit a striking apical cell protrusion, the cuticular villus. It is partially fused with the kinocilium in the contact zones and joined to the otolithic membrane. The cuticular villus probably stabilizes the long kinocilia.
Julia A Mossbridge
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when the visual displays presented dynamics clearly dissimilar to the music. These results suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar
Given the relevance of possible hearing losses due to sound overloads and the short list of references on objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. No fatigue was observed in terms of sensory receptors. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
Background: Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of the concurrent auditory segregation procedure, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Methods: Concurrent auditory segregation using the competing sentence test (CST) and dichotic digits test (DDT) was assessed and compared...
Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a wrong stimulus or, put more precisely, perception in the absence of a stimulus. Here we will discuss four definitions of hallucinations: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper, i.e., wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception that has no accordance with a real subject; 4. in a stricter sense, hallucinations defined as perceptions in a conscious and awake state, in the absence of external stimuli, that have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We are going to discuss these in detail here.
Background: Scrub typhus, a mite-transmitted zoonosis caused by Orientia tsutsugamushi, is an endemic disease in Taiwan and may be potentially fatal if diagnosis is delayed. Case presentation: We encountered a 23-year-old previously healthy Taiwanese male soldier presenting with right ear pain after training in the jungle and an eleven-day history of intermittent high fever up to 39°C. Amoxicillin/clavulanate was prescribed for otitis media at a local clinic. A skin rash over the whole body and abdominal cramping pain with watery diarrhea appeared on the sixth day of fever. He was referred due to progressive dyspnea and cough for 4 days prior to admission to our institution. On physical examination, there were cardiopulmonary distress, icteric sclera, an eschar in the right external auditory canal, and bilateral basal rales. Laboratory evaluation revealed thrombocytopenia, elevated liver function tests and acute renal failure. Chest x-ray revealed bilateral diffuse infiltration. Doxycycline was prescribed for scrub typhus with acute respiratory distress syndrome and multiple organ failure. Fever subsided dramatically the next day and he was discharged on day 7 with oral tetracycline for 7 days. Conclusion: Scrub typhus should be considered in acutely febrile patients with multiple organ involvement, particularly if there is an eschar or a history of environmental exposure in endemic areas. Rapid and accurate diagnosis, timely administration of antibiotics and intensive supportive care are necessary to decrease the mortality of serious complications of scrub typhus.
Sickmann, T; Kalmring, K; Müller, A
The structure of the complex tibial organs in the fore-, mid- and hindlegs of the bushcricket Polysarcus denticauda (Tettigoniidae, Phaneropterinae) is described comparatively. As is common for bushcrickets, in each leg the tibial organs consist of the subgenual and intermediate organs and the crista acustica. Only in the forelegs are sound-transmitting structures present. They consist of the spiracle, acoustic trachea, and two tympana; the latter are not protected by tympanal covers. The tympana in P. denticauda are extremely thick, not only bordering the two tracheal branches to the outside but also forming the outer wall of the hemolymph channel. The morphology of the tracheae in the mid- and hindlegs is significantly different, causing structural differences, especially in dimensions of the hemolymph channel. The number of scolopidia of the crista acustica of the foreleg is extremely high for a bushcricket. Approximately 50 receptor cells were found, about half of them being located in the distal quarter of the long axis of this organ. Some of the receptors are positioned in parallel on the dorsal wall of the anterior tracheal branch. The number, morphology and dimensions of the scolopidia within the crista acustica of the mid- and hindlegs differ significantly from those of the forelegs, decreasing in both legs to eight and seven receptor cells, respectively. Although the dimensions of the subgenual and intermediate organs are considerably larger in the mid- and hindlegs, the number of receptor cells is approximately the same in the different legs, being somewhat higher in both receptor organs than in those of many other bushcricket species studied previously.
Temchin, Andrei N; Recio-Spinoso, Alberto; Cai, Hongxue; Ruggero, Mario A
Spatial magnitude and phase profiles for inner hair cell (IHC) depolarization throughout the chinchilla cochlea were inferred from responses of auditory-nerve fibers (ANFs) to threshold- and moderate-level tones and tone complexes. Firing-rate profiles for frequencies ≤2 kHz are bimodal, with the major peak at the characteristic place and a secondary peak at 3-5 mm from the extreme base. Response-phase trajectories are synchronous with peak outward stapes displacement at the extreme cochlear base and accumulate 1.5 period lags at the characteristic places. High-frequency phase trajectories are very similar to the trajectories of basilar-membrane peak velocity toward scala tympani. Low-frequency phase trajectories undergo a polarity flip in a region, 6.5-9 mm from the cochlear base, where traveling-wave phase velocity attains a local minimum and a local maximum and where the onset latencies of near-threshold impulse responses computed from responses to near-threshold white noise exhibit a local minimum. That region is the same where frequency-threshold tuning curves of ANFs undergo a shape transition. Since depolarization of IHCs presumably indicates the mechanical stimulus to their stereocilia, the present results suggest that distinct low-frequency forward waves of organ of Corti vibration are launched simultaneously at the extreme base of the cochlea and at the 6.5-9 mm transition region, from where antiphasic reflections arise.
Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan
Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.
Categorization enables listeners to efficiently encode and respond to auditory stimuli. Behavioral evidence for auditory categorization has been well documented across a broad range of human and non-human animal species. Moreover, neural correlates of auditory categorization have been documented in a variety of different brain regions in the ventral auditory pathway, which is thought to underlie auditory-object processing and auditory perception. Here, we review and discuss how neural representations of auditory categories are transformed across different scales of neural organization in the ventral auditory pathway: from across different brain areas to within local microcircuits. We propose different neural transformations across different scales of neural organization in auditory categorization. Along the ascending auditory system in the ventral pathway, there is a progression in the encoding of categories from simple acoustic categories to categories for abstract information. On the other hand, in local microcircuits, different classes of neurons differentially compute categorical information.
Walters, Bradley J; Zuo, Jian
Genetic mouse models provide invaluable tools for discerning gene function in vivo. Tetracycline-inducible systems (Tet-On/Off) provide temporal and cell-type specific control of gene expression, offering an alternative or even complementary approach to existing Cre/LoxP systems. Here we characterized a Sox10(rtTA/+) knock-in mouse line which demonstrates inducible reverse tetracycline trans-activator (rtTA) activity and Tet-On transgene expression in the inner ear following induction with the tetracycline derivative doxycycline (Dox). These Sox10(rtTA/+) mice do not exhibit any readily observable developmental or hearing phenotypes, and actively drive Tet-On transgene expression in Sox10 expressing cells in the inner ear. Sox10(rtTA/+) activity was revealed by multiple Tet-On reporters to be nearly ubiquitous throughout the membranous labyrinth of the developing inner ear, and notably absent from hair cells, tympanic border cells, and ganglion neurons following postnatal Dox inductions. Interestingly, Dox-induced Sox10(rtTA/+) activity declined with induction age, where Tet-On reporters became uninducible in adult cochlear epithelium. Co-administration of the loop diuretic furosemide was able to rescue Dox-induced reporter expression, though this method also caused significant cochlear hair cell loss. Surprisingly, Sox10(rtTA/+) driven reporter expression in the cochlea persists for at least 54 days after cessation of neonatal induction, presumably due to the persistence of Dox within inner ear tissues. These findings highlight the utility of the Sox10(rtTA/+) mouse line as a powerful tool for functional genetic studies of the auditory and balance organs in vivo, but also reveal some important considerations that must be adequately controlled for in future studies that rely upon Tet-On/Off systems.
Hall, J; Hubbard, A; Neely, S; Tubis, A
How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...
Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.
Objective: To characterize the impulse noise exposure and auditory risk for air rifle users, both youth and adults. Design: Acoustic characteristics were examined and auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120 dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample: Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results: None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, and 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors, and pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion: To minimize auditory risk, youth should use air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model appropriate hearing health behaviors necessary for recreational firearm use. PMID:26840923
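The two exposure metrics referenced above, peak SPL and an equivalent continuous level, can be computed directly from a calibrated pressure waveform. The sketch below is a simplified illustration only: the impulse is synthetic, and A-weighting (required for a true LAeq as used in the WHO criteria) is omitted for brevity:

```python
import numpy as np

P_REF = 20e-6  # reference pressure: 20 micropascals

def peak_spl(pressure):
    """Peak sound pressure level in dB re 20 uPa."""
    return 20.0 * np.log10(np.max(np.abs(pressure)) / P_REF)

def leq(pressure, fs, duration):
    """Equivalent continuous level of an impulse averaged over
    `duration` seconds (unweighted; apply A-weighting first for LAeq)."""
    energy = np.sum(pressure ** 2) / fs  # Pa^2 * s
    mean_square = energy / duration      # averaged over the window
    return 10.0 * np.log10(mean_square / P_REF ** 2)

# Synthetic impulse: a decaying 1 kHz burst with a ~20 Pa peak (~119 dB).
fs = 44100
t = np.arange(0, 0.01, 1.0 / fs)
impulse = 20.0 * np.exp(-t / 0.002) * np.sin(2 * np.pi * 1000 * t)

peak_db = peak_spl(impulse)       # peak SPL of the impulse
leq_75s = leq(impulse, fs, 75.0)  # impulse energy spread over 75 s
```

Note how the equivalent level of a single brief impulse is far below its peak level, which is why impulse criteria quote both a peak limit and an energy-based limit.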
Robert, D.; Göpfert, M. C.
Evidence is presented that hearing in some insects is an active process. Audition in mosquitoes is used for mate-detection and is supported by antennal receivers, whose sound-induced vibrations are transduced by Johnston's organs. Each of these sensory organs contains ca. 15,000 sensory neurons. As shown by mechanical analysis, a physiologically vulnerable mechanism is at work that nonlinearly enhances the sensitivity and frequency selectivity of antennal hearing. This process of amplification correlates with the electrical activity of the auditory mechanoreceptor units in Johnston's organ.
... auditory potentials; Brainstem auditory evoked potentials; Evoked response audiometry; Auditory brainstem response; ABR; BAEP ... Normal results vary. Results will depend on the person and the instruments used to perform the test.
... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...
Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 1970s represents an attempt to interpret contemporary environments through musical and auditory...
Hackett, Troy A; Rinaldi Barkat, Tania; O'Brien, Barbara M J
The mouse sensory neocortex is reported to lack several hallmark features of topographic organization such as ocular dominance and orientation columns in primary visual cortex or fine-scale tonotopy in primary auditory cortex (AI). Here, we re-examined the question of auditory functional topography by aligning ultra-dense receptive field maps from the auditory cortex and thalamus of the mouse in vivo with the neural circuitry contained in the auditory thalamocortical slice in vitro. We observed precisely organized tonotopic maps of best frequency (BF) in the middle layers of AI and the anterior auditory field as well as in the ventral and medial divisions of the medial geniculate body (MGBv and MGBm, respectively). Tracer injections into distinct zones of the BF map in AI retrogradely labeled topographically organized MGBv projections and weaker, mixed projections from MGBm. Stimulating MGBv along...
Episodic memory, or the ability to store context-rich information about everyday events, depends on the hippocampal formation (entorhinal cortex, subiculum, presubiculum, parasubiculum, hippocampus proper, and dentate gyrus). A substantial number of behavioral-lesion and anatomical studies have contributed to our understanding of how visual stimuli are retained in episodic memory. However, whether auditory memory is organized similarly is still unclear. One hypothesis is that, like the 'visual ventral stream', for which the connections of the inferior temporal gyrus with the perirhinal cortex are necessary for visual recognition in monkeys, direct connections between the auditory association areas of the superior temporal gyrus and the hippocampal formation and the parahippocampal region (temporal pole, perirhinal, and posterior parahippocampal cortices) might also underlie recognition memory for sounds. Alternatively, the anatomical organization of memory could be different in audition. This alternative 'indirect stream' hypothesis posits that, unlike the visual association cortex, the majority of the auditory association cortex makes one or more synapses in intermediate, polymodal areas, where auditory signals may be integrated with information from other sensory modalities, before reaching the medial temporal memory system. This review considers anatomical studies that can support either one or both hypotheses, focusing on anatomical studies of the primate brain that have reported not only direct auditory association connections with medial temporal areas but, importantly, also possible indirect pathways for auditory information to reach the medial temporal lobe memory system.
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Workshop on experiences with and use of activating methods in teaching in auditoriums and with large classes. Which methods have worked well and which poorly? What considerations should one make?
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji
A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048
Hall, M.; Smeele, P.M.T.; Kuhl, P.K.
The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual
Coffman, Brian A; Haigh, Sarah M; Murphy, Timothy K; Leiter-Mcbeth, Justin; Salisbury, Dean F
Auditory scene analysis (ASA) dysfunction is likely an important component of the symptomatology of schizophrenia. Auditory object segmentation, the grouping of sequential acoustic elements into temporally-distinct auditory objects, can be assessed with electroencephalography through measurement of the auditory segmentation potential (ASP). Further, N2 responses to the initial and final elements of auditory objects are enhanced relative to medial elements, which may indicate auditory object edge detection (initiation and termination). Both ASP and N2 modulation are impaired in long-term schizophrenia. To determine whether these deficits are present early in disease course, we compared ASP and N2 modulation between individuals at their first episode of psychosis within the schizophrenia spectrum (FE, N=20) and matched healthy controls (HC, N=24). The ASP was reduced by >40% in FE; however, N2 modulation was not statistically different from HC. This suggests that auditory segmentation (ASP) deficits exist at this early stage of schizophrenia, but auditory edge detection (N2 modulation) is relatively intact. In a subset of subjects for whom structural MRIs were available (N=14 per group), ASP sources were localized to midcingulate cortex (MCC) and temporal auditory cortex. Neurophysiological activity in FE was reduced in MCC, an area linked to aberrant perceptual organization, negative symptoms, and cognitive dysfunction in schizophrenia, but not temporal auditory cortex. This study supports the validity of the ASP for measurement of auditory object segmentation and suggests that the ASP may be useful as an early index of schizophrenia-related MCC dysfunction. Further, ASP deficits may serve as a viable biomarker of disease presence. Copyright © 2017 Elsevier B.V. All rights reserved.
Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand the way in which some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system and music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.
Eric Olivier Boyer
Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation in order to carry out online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.
Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee
... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
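The two internal representations in this model lend themselves to simple signal-level illustrations: a pitch estimate from the lag of the dominant autocorrelation peak, and an interaural coherence measure (IACC) from the maximum of the normalized interaural cross-correlation within roughly ±1 ms. The sketch below uses synthetic signals and is a crude stand-in, not Ando's actual running-autocorrelation formulation:

```python
import numpy as np

def autocorr_pitch(x, fs, fmin=50.0):
    """Estimate pitch as the lag of the largest autocorrelation peak
    beyond zero lag (searched up to fs/fmin samples)."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    max_lag = int(fs / fmin)
    start = np.argmax(ac[:max_lag] < 0)      # skip past the zero-lag peak
    lag = start + np.argmax(ac[start:max_lag])
    return fs / lag

def iacc(left, right, fs, max_ms=1.0):
    """Maximum of the normalized interaural cross-correlation
    within +/- max_ms milliseconds of zero lag."""
    n = int(fs * max_ms / 1000.0)
    norm = np.sqrt(np.dot(left, left) * np.dot(right, right))
    full = np.correlate(left, right, mode="full") / norm
    center = len(left) - 1
    return np.max(full[center - n:center + n + 1])

fs = 16000
t = np.arange(0, 0.1, 1.0 / fs)
# Harmonic complex with a 200 Hz fundamental
x = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in (1, 2, 3))
f0 = autocorr_pitch(x, fs)           # recovers the 200 Hz fundamental

delay = int(0.0005 * fs)             # 0.5 ms interaural delay
left, right = x[delay:], x[:-delay]
coherence = iacc(left, right, fs)    # near 1 for a delayed copy
```

A purely delayed copy of a signal keeps its interaural cross-correlation near 1, which is why a small IACC (diffuse, decorrelated sound fields) is associated with the sensation of envelopment in Ando's framework.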
David L Woods
While auditory cortex in non-human primates has been subdivided into multiple functionally-specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and nonattended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to nonattended sounds. Three centrally-located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally-defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.
Turner, M A; Bandelow, S; Edwards, L; Patel, P; Martin, H J; Wilson, I D; Thomas, C L P
This study sought to identify whether detectable changes in human breath profiles may be observed following a psychological intervention designed to induce stress, a paced auditory serial addition test (PASAT). Breath samples were collected from 22 participants (10 male and 12 female) following a double cross-over randomized design with two experimental interventions. One intervention required participants to listen to classical music chosen to be neutral. The other intervention required participants to undertake a PASAT that induced cardiovascular responses consistent with acute stress. Both interventions also involved two sequences of cognitive function tests. Blood pressure and heart rate were recorded throughout each intervention and distal breath samples were collected onto Tenax® TA/Carbograph 1 thermal desorption tubes, using an adaptive breath sampler. Samples were collected before and after the PASAT. Breath samples were analysed by thermal desorption gas chromatography-mass spectrometry. Data registration using retention indexing and peak deconvolution followed by partial least-squares discriminant analysis identified six stress-sensitive compounds. A principal components analysis based on these compounds generated a model that predicted post-PASAT versus post-neutral intervention samples with a sensitivity of 83.3% and a selectivity of 91.6% for females, compared to 100% sensitivity and 90% selectivity for males. Of the six compounds, indole, 2-hydroxy-1-phenylethanone, benzaldehyde, and 2-ethylhexan-1-ol were identified on the basis of mass spectral data, retention indexing, and confirmation against pure standards. 2-Methylpentadecane was tentatively identified from mass spectral data and retention indexing, whilst one component has yet to be assigned, although the mass spectrum is indicative of a terpene. Indole and 2-methylpentadecane concentrations increased in response to the PASAT intervention, while the other compounds reduced in their abundance in human
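The reported sensitivity and "selectivity" (specificity) are standard confusion-matrix quantities. A toy sketch follows; the labels and counts are made up, chosen only to mirror the female-group figures (5/6 ≈ 83.3%, 11/12 ≈ 91.7%), and are not the study's data.

```python
def sensitivity_specificity(y_true, y_pred, positive):
    """Sensitivity = detected positives / all actual positives;
    'selectivity' (specificity) = rejected negatives / all actual negatives."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    tn = sum(1 for t, p in pairs if t != positive and p != positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels: 6 post-PASAT samples (5 caught) and 12 post-neutral samples
# (11 correctly rejected) give sensitivity 5/6 and specificity 11/12.
truth = ["post-PASAT"] * 6 + ["post-neutral"] * 12
pred = (["post-PASAT"] * 5 + ["post-neutral"]
        + ["post-neutral"] * 11 + ["post-PASAT"])
sens, spec = sensitivity_specificity(truth, pred, positive="post-PASAT")
```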
Alexandra P. Key
Full Text Available Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation, and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.
Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L
Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (c) 2015 APA, all rights reserved.
Kaya, Emine Merve; Elhilali, Mounya
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information-a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Mann, Philip H.; Suiter, Patricia A.
This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…
Sussman, Elyse S.
Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.
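Computationally, the MMN index used in this line of work is a difference wave between averaged responses. A minimal sketch with toy epochs follows; the numbers are illustrative, not real ERP data.

```python
def average(epochs):
    """Average event-related potential across trials (lists of samples)."""
    n = len(epochs)
    return [sum(trial[i] for trial in epochs) / n for i in range(len(epochs[0]))]

def mismatch_negativity(standard_epochs, deviant_epochs):
    """MMN difference wave: averaged deviant ERP minus averaged standard ERP.
    The MMN appears as a negative deflection in this difference, typically
    ~100-250 ms after the deviant sound."""
    std, dev = average(standard_epochs), average(deviant_epochs)
    return [d - s for d, s in zip(dev, std)]

# Toy data: two standard and two deviant trials of four samples each;
# the deviant response diverges at the third sample.
standard = [[0.0, 1.0, 1.0, 0.0], [0.0, 1.0, 1.0, 0.0]]
deviant = [[0.0, 1.0, -1.0, 0.0], [0.0, 1.0, -1.0, 0.0]]
mmn = mismatch_negativity(standard, deviant)
```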
Meinke, Deanna K; Murphy, William J; Finan, Donald S; Lankford, James E; Flamme, Gregory A; Stewart, Michael; Soendergaard, Jacob; Jerome, Trevor W
To characterize the impulse noise exposure and auditory risk for youth recreational firearm users engaged in outdoor target shooting events. The youth shooting positions are typically standing or sitting at a table, which places the firearm closer to the ground or reflective surface when compared to adult shooters. Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit suggested by the World Health Organization (1999) for children. Impulses were generated by 26 firearm/ammunition configurations representing rifles, shotguns, and pistols used by youth. Measurements were obtained relative to a youth shooter's left ear. All firearms generated peak levels that exceeded the 120 dB peak limit suggested by the WHO for children. In general, shooting from the seated position over a tabletop increases the peak levels and LAeq8, and reduces the unprotected maximum permissible exposures (MPEs) for both rifles and pistols. Pistols pose the greatest auditory risk when fired over a tabletop. Youth should utilize smaller caliber weapons, preferably from the standing position, and always wear hearing protection whenever engaging in shooting activities to reduce the risk for auditory damage.
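The 120 dB peak limit can be related to physical pressure with the standard SPL formula, 20·log10(p / 20 µPa). A small sketch follows; the 160 dB "rifle" figure is an assumed round number for illustration, since measured firearm peaks vary by configuration.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals

def peak_spl(p_peak_pa):
    """Peak sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(p_peak_pa / P_REF)

def peak_pressure(spl_db):
    """Inverse: peak pressure in pascals for a given dB peak level."""
    return P_REF * 10 ** (spl_db / 20)

# The WHO 120 dB peak limit corresponds to ~20 Pa of peak pressure; an
# assumed 160 dB impulse corresponds to ~2000 Pa, i.e. every 20 dB step
# is a tenfold increase in peak pressure.
limit_pa = peak_pressure(120.0)
impulse_pa = peak_pressure(160.0)
```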
Mohammad hosein Hekmat Ara
Full Text Available Hearing is one of the most essential senses of human beings. Sound waves travel through the air, enter the ear canal, and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of “impedance matching”. The sound energy is then converted into a traveling wave that propagates according to its specific frequency and stimulates the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and transfer them to the brain. The central nervous system pathway conducting auditory signals to the auditory cortex is briefly explained here.
Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.
The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671
Skoe, Erika; Kraus, Nina
Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence o...
Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis
Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.
Talavage, Thomas M.; Gonzalez-Castillo, Javier; Scott, Sophie K.
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. PMID:24076424
Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant
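The abstract does not specify how its discrimination thresholds were tracked; difference limens such as the DLF are commonly estimated with adaptive staircases. A generic 2-down-1-up sketch with a deterministic simulated listener follows; all names and parameters here are illustrative assumptions, not the study's procedure.

```python
def two_down_one_up(respond, start, step, reversals_needed=8):
    """Generic 2-down-1-up staircase; converges near the 70.7%-correct point.
    `respond(delta)` returns True if the listener detects difference delta.
    The threshold estimate is the mean delta at the turning points."""
    delta, streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < reversals_needed:
        if respond(delta):
            streak += 1
            if streak == 2:          # two correct in a row: make it harder
                streak = 0
                if direction == +1:  # turning point: was getting easier
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, step)
        else:                        # one error: make it easier
            streak = 0
            if direction == -1:      # turning point: was getting harder
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)

# Deterministic toy listener with a true threshold of 5 units: the track
# descends from 20, then oscillates between 4 and 6, averaging to 5.
threshold = two_down_one_up(lambda d: d >= 5, start=20.0, step=2.0)
```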
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.
Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular
The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
J Gordon Millichap
Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.
Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of a sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here Fourier optical imaging was used to quantify best frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization. PMID:20980610
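The ILD cue probed in this study is simply the level difference, in dB, between the signals at the two ears. A minimal sketch follows; the function names and demo values are mine, and real stimuli typically vary ILD by attenuating one channel around a fixed average binaural level.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def ild_db(left, right):
    """Interaural level difference in dB; positive means the left ear
    receives the more intense signal, cueing a source in the left hemifield."""
    return 20 * math.log10(rms(left) / rms(right))

# A left signal at twice the amplitude of the right gives an ILD of
# about +6 dB; identical signals (a midline source) give 0 dB.
left = [2.0, -2.0, 2.0, -2.0]
right = [1.0, -1.0, 1.0, -1.0]
example_ild = ild_db(left, right)
```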
Mylius, Judith; Brosch, Michael; Scheich, Henning; Budinger, Eike
By means of the Golgi-Cox and Nissl methods we investigated the cyto- and fiberarchitecture as well as the morphology of neurons in the subcortical auditory structures of the Mongolian gerbil (Meriones unguiculatus), a frequently used animal model in auditory neuroscience. We describe the divisions and subdivisions of the auditory thalamus including the medial geniculate body, suprageniculate nucleus, and reticular thalamic nucleus, as well as of the inferior colliculi, nuclei of the lateral lemniscus, superior olivary complex, and cochlear nuclear complex. In this study, we 1) confirm previous results about the organization of the gerbil's subcortical auditory pathway using other anatomical staining methods (e.g., Budinger et al., Eur J Neurosci 12:2452-2474); 2) add substantially to the knowledge about the laminar and cellular organization of the gerbil's subcortical auditory structures, in particular about the orientation of their fibrodendritic laminae and about the morphology of their most distinctive neuron types; and 3) demonstrate that the cellular organization of these structures, as seen by the Golgi technique, corresponds generally to that of other mammalian species, in particular to that of rodents. Copyright © 2012 Wiley Periodicals, Inc.
Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies
The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.
Kroon, Steven; Ramekers, Dyan; Smeets, Emma M; Hendriksen, Ferry G J; Klis, Sjaak F L; Versnel, Huib
Damage to and loss of the organ of Corti leads to secondary degeneration of the spiral ganglion cell (SGC) somata of the auditory nerve. Extensively examined in animal models, this degeneration process of SGC somata following deafening is well known. However, degeneration of auditory nerve axons,
A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Jackson, Thomas E; Sandramouli, Soupramanien
Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote reflection on the importance of speech therapy in stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis compared the auditory abilities deficits identified in the first auditory processing test, held on April 30, 2002, with a new auditory processing test done on May 13, 2003, after one year of therapy directed at acoustic stimulation of the impaired auditory abilities, in acco...
Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.
Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.
Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali
Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.
Duncan, Jeremy S; Fritzsch, Bernd
We review the molecular basis of auditory development and evolution. We propose that the auditory periphery (basilar papilla, organ of Corti) evolved by transforming a newly created and redundant vestibular (gravistatic) endorgan into a sensory epithelium that could respond to sound instead of gravity. Evolution altered this new epithelium's mechanoreceptive properties through changes of hair cells, positioned the epithelium in a unique position near perilymphatic space to extract sound moving between the round and the oval window, and transformed its otolith covering into a tympanic membrane. Another important step in the evolution of an auditory system was the evolution of a unique set of "auditory neurons" that apparently evolved from vestibular neurons. Evolution of mammalian auditory (spiral ganglion) neurons coincides with GATA3 being a transcription factor found selectively in the auditory afferents. For the auditory information to be processed, the CNS required a dedicated center for auditory processing, the auditory nuclei. It is not known whether the auditory nucleus is ontogenetically related to the vestibular or electroreceptive nuclei, two sensory systems found in aquatic but not in amniotic vertebrates, or a de-novo formation of the rhombic lip in line with other novel hindbrain structures such as pontine nuclei. Like other novel hindbrain structures, the auditory nuclei express exclusively the bHLH gene Atoh1, and loss of Atoh1 results in loss of most of this nucleus in mice. Only after the basilar papilla (organ of Corti) evolved could efferent neurons begin to modulate their activity. These auditory efferents most likely evolved from vestibular efferent neurons already present. The most simplistic interpretation of the available data suggests that the ear, sensory neurons, auditory nucleus, and efferent neurons have been transformed by altering the developmental genetic modules necessary for their development into a novel direction conducive for sound
Carrat, R; Thillier, J L; Durivault, J
The liminal auditory threshold for white noise and for coloured noise was determined from a statistical survey of a group of 21 young people with normal hearing. The normal auditory threshold for white noise with a spectrum covering the whole of the auditory field is −0.57 dB ± 8.78. The normal auditory threshold for bands of filtered white noise (coloured noise with a central frequency corresponding to the pure frequencies usually employed in tonal audiometry) describes a typical curve which, instead of being homothetic to the usual tonal curves, sinks at low frequencies and then rises. The peak of this curve is replaced by a broad plateau ranging from 750 to 6000 Hz and contained in the concavity of the liminal tonal curves. The ear is therefore less sensitive but, at limited acoustic pressure, white noise first impinges with the same discrimination upon the whole of the conversational zone of the auditory field. Determination of the audiometric threshold for white noise constitutes a synthetic method of measuring hearing acuity which considerably reduces the amount of manipulation required.
Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta-band phase tracking shows rightward lateralization, whereas gamma-band phase tracking occurs bilaterally. The results support the hypothesis that multi-time resolution processing occurs in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
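The "phase tracking" described here is typically quantified as inter-trial phase coherence at the frequency of interest. The following is only a loose illustration of that idea under simulated data, not the study's actual MEG pipeline; all names and parameters are hypothetical:

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """Phase tracking across trials: take each trial's Fourier coefficient
    at `freq`, reduce it to a unit phase vector, and measure how aligned
    the phases are. ITC = 1 means perfect phase locking, ~0 means none."""
    n = trials.shape[1]
    t = np.arange(n) / fs
    basis = np.exp(-2j * np.pi * freq * t)   # single-frequency DFT vector
    coeffs = trials @ basis                  # complex amplitude per trial
    phases = coeffs / np.abs(coeffs)         # unit phase vectors
    return np.abs(phases.mean())

# Toy demonstration: responses phase-locked to a 5 Hz (theta-rate) input
# yield high ITC; responses with random phase per trial yield low ITC.
rng = np.random.default_rng(0)
fs, dur, f = 250, 2.0, 5.0
t = np.arange(int(fs * dur)) / fs
locked = np.array([np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(t.size)
                   for _ in range(40)])
jittered = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                     + 0.5 * rng.standard_normal(t.size) for _ in range(40)])
print(inter_trial_coherence(locked, fs, f) > inter_trial_coherence(jittered, fs, f))
```

On this logic, stimuli whose temporal structure matches an oscillatory band drive consistent phase across trials, which is what the reported theta- and gamma-band tracking measures.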
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as when following one particular conversation at a cocktail party. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude-modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research about chronic stress effects on the auditory system, providing the details of how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory systems, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article gives a new evolutionary approach to understanding the neurobiology of stress and the stress-related disorders.
The article summarizes information on assistive devices (hearing aids, cochlear implants, tactile aids, visual aids) and rehabilitation procedures (auditory training, speechreading, cued speech, and speech production) to aid the auditory learning of the hearing impaired.
Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.
Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals
Lunney, David; Morrison, Robert C.
Our research group has been working for several years on the development of auditory alternatives to visual graphs, primarily in order to give blind science students and scientists access to instrumental measurements. In the course of this work we have tried several modes for auditory presentation of data: synthetic speech, tones of varying pitch, complex waveforms, electronic music, and various non-musical sounds. Our most successful translation of data into sound has been presentation of infrared spectra as musical patterns. We have found that if the stick spectra of two compounds are visibly different, their musical patterns will be audibly different. Other possibilities for auditory presentation of data are also described, among them listening to Fourier transforms of spectra, and encoding data in complex waveforms (including synthetic speech).
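The data-to-pitch translation described above can be sketched as quantizing each data value onto an equal-tempered semitone scale and rendering it as a short tone. This is an illustrative mapping only, not the authors' actual encoding; all parameter choices here are hypothetical:

```python
import numpy as np

def sonify(values, fs=8000, note_dur=0.25, f_base=261.63, n_semitones=24):
    """Map each data value to a semitone step above middle C (261.63 Hz)
    and render it as a short sine tone; concatenating the tones produces
    an audible 'musical pattern' for the data series."""
    v = np.asarray(values, dtype=float)
    span = np.ptp(v) or 1.0                   # guard against flat data
    scaled = (v - v.min()) / span             # normalize to [0, 1]
    semis = np.round(scaled * n_semitones)    # quantize to semitone steps
    freqs = f_base * 2.0 ** (semis / 12.0)    # equal-tempered pitches
    t = np.arange(int(fs * note_dur)) / fs
    audio = np.concatenate([np.sin(2 * np.pi * fr * t) for fr in freqs])
    return audio, freqs

# A rise-and-fall data series becomes a rise-and-fall melodic contour.
audio, freqs = sonify([0.1, 0.5, 0.9, 0.5, 0.1])
```

With a 24-semitone range, two spectra that differ visibly in peak positions map to audibly different pitch patterns, which is the property the authors report exploiting for infrared spectra.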
Chen, Sufen; Sussman, Elyse S.
The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration, while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference between the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that the sound context can modulate the effect of deviance level on processing irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in the change detection processes that lead to auditory distraction.
Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke; they have mostly been described after lesions of the brain stem and are only very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.
Borra, Tobias; Versnel, Huib; Kemner, Chantal; van Opstal, A John; van Ee, Raymond
After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.
Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde
Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In the present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.
Blom, Jan Dirk; Sommer, Iris E. C.
Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an
Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor
Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen.
Simon, Jonathan Z
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.
Razak, Khaleel A.; Fuzessery, Zoltan M.
A consistent organizational feature of auditory cortex is a clustered representation of binaural properties. Here we address two questions: What is the intrinsic organization of binaural clusters? And to what extent does intracortical processing contribute to binaural representation? We address these issues in the auditory cortex of the pallid bat. The pallid bat listens to prey-generated noise transients to localize and hunt terrestrial prey. As in other species studied, binaural clusters are...
Gutschalk, Alexander; Brandt, Tobias; Bartsch, Andreas; Jansen, Claudia
In contrast to lesions of the visual and somatosensory cortex, lesions of the auditory cortex are not associated with self-evident contralesional deficits. Only when two or more stimuli are presented simultaneously to the left and right has contralesional extinction been observed after unilateral lesions of the auditory cortex. Because auditory extinction is also considered a sign of neglect, clinical separation of auditory neglect from deficits caused by lesions of the auditory cortex is challenging. Here, we directly compared a number of tests previously used for either auditory-cortex lesions or neglect in 29 controls and 27 patients suffering from unilateral auditory-cortex lesions, neglect, or both. The results showed that a dichotic-speech test revealed similar amounts of extinction for both auditory-cortex lesions and neglect. Similar results were obtained for words lateralized by interaural time differences. Consistent extinction after auditory-cortex lesions was also observed in a dichotic detection task. Neglect patients showed more general problems with target detection but no consistent extinction in the dichotic detection task. In contrast, auditory lateralization perception was biased toward the right in neglect but showed considerably less disruption by auditory-cortex lesions. Lateralization of auditory-evoked magnetic fields in auditory cortex was highly correlated with extinction in the dichotic target-detection task. Moreover, activity in the right primary auditory cortex was somewhat reduced in neglect patients. The results confirm that auditory extinction is observed with lesions of the auditory cortex and with auditory neglect. A distinction can nevertheless be made with dichotic target-detection tasks, auditory-lateralization perception, and magnetoencephalography. Copyright © 2012 Elsevier Ltd. All rights reserved.
Zmigrod, Sharon; Hommel, Bernhard
The features of perceived objects are processed in distinct neural pathways, which calls for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken together, the findings are consistent with the idea of episodic event files integrating perception and action plans.
Skoe, E; Krizman, J; Spitzer, E; Kraus, N
To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
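The "sound-to-sound probabilities" that distinguish the patterned from the pseudo-random sequence are usually cast as transitional probabilities between adjacent sounds, with pattern boundaries at the points where those probabilities dip. A minimal sketch of that computation, using a hypothetical two-"word" stream rather than the study's actual stimuli:

```python
from collections import Counter

def transition_probabilities(stream):
    """Estimate P(next | current) from a sound sequence. In statistical-
    learning paradigms, dips in transitional probability mark boundaries
    between embedded patterns ('words')."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Hypothetical patterned stream built from two 'words' (A-B-C and D-E-F):
# within-word transitions are deterministic (p = 1.0), while between-word
# transitions are uncertain, creating the structure a listener can track.
stream = list("ABCDEFABCABCDEFDEFABCDEF")
tp = transition_probabilities(stream)
print(tp[("A", "B")], tp[("B", "C")])   # within-word transitions: 1.0 1.0
```

A pseudo-randomized control sequence flattens these probabilities while keeping the same sound inventory, which is exactly the contrast the cABR difference measure exploits.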
Frey, Aline; Aramaki, Mitsuko; Besson, Mireille
Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. Copyright © 2013 Elsevier Inc. All rights reserved.
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of the concurrent auditory segregation procedure, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Concurrent auditory segregation using the competing sentence test (CST) and dichotic digits test (DDT) was assessed and compared in 30 male older adults (15 normal and 15 cases with right hemisphere CVA) in the same age group (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, the auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing the mean scores of the CST and DDT between CVA patients with right hemisphere impairment and normal participants showed a statistically significant difference (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already exhibit adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
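Stimulus-specific adaptation is commonly quantified with an SSA index that contrasts a neuron's responses to the same two tones when each is rare (deviant) versus frequent (standard). A minimal sketch of that index, with hypothetical spike counts:

```python
def ssa_index(d_f1, d_f2, s_f1, s_f2):
    """Common SSA index: compares a neuron's responses to the same two
    tones (f1, f2) when each is rare (deviant, d) versus frequent
    (standard, s). Ranges from -1 to 1; positive values mean stronger
    responses to rare tones, i.e. stimulus-specific adaptation."""
    deviant = d_f1 + d_f2
    standard = s_f1 + s_f2
    return (deviant - standard) / (deviant + standard)

# Hypothetical spike counts: the neuron fires more when a tone is rare.
print(ssa_index(d_f1=30, d_f2=28, s_f1=10, s_f2=12))  # → 0.45
```

Because both tones enter symmetrically, a positive index reflects adaptation specific to the frequent stimulus rather than general fatigue, which is the distinction the paragraph above draws.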
Auditory dysfunction is a common clinical symptom that can have profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.
Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta
Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real a hallucination the subjects experienced depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.
and Piercy, M. (1973). Defects of non-verbal auditory perception in children with developmental aphasia. Nature (London), 241, 468-469. Watson, C.S. ... Hearing and Communication Laboratory, Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405. Final Technical Report, Air Force Office of Scientific Research AFOSR-84-0337, September 1, 1984 to August 31, 1987.
In this article, an account is given of the author's experience with auditory-based neuropsychology in a clinical, neurosurgical setting. The patients included in the studies are patients with traumatic or vascular brain lesions, patients undergoing brain surgery to alleviate symptoms of Parkinson's disease, or patients harbouring an intracranial arachnoid cyst affecting the temporal or the frontal lobe. The aims of these investigations were to collect information about the localization of cognitive processes in the human brain, or to disclose dyscognition in patients with an arachnoid cyst. All the patients were tested with the dichotic listening (DL) technique. In addition, the cyst patients were subjected to a number of non-auditory, standard neuropsychological tests, such as the Benton Visual Retention Test, Street Gestalt Test, Stroop Test, and Trails Test A and B. The neuropsychological tests revealed that arachnoid cysts in general cause dyscognition that also includes auditory processes and, more importantly, that these cognitive deficits normalise after surgical removal of the cyst. These observations constitute strong evidence in favour of surgical decompression.
Schwartz, Marc S; Wilkinson, Eric P
Auditory brainstem implants (ABIs), which have previously been used to restore auditory perception to deaf patients with neurofibromatosis type 2 (NF2), are now being utilized in other situations, including treatment of congenitally deaf children with cochlear malformations or cochlear nerve deficiencies. Concurrent with this expansion of indications, the number of centers placing and expressing interest in placing ABIs has proliferated. Because ABI placement involves posterior fossa craniotomy in order to access the site of implantation on the cochlear nucleus complex of the brainstem and is not without significant risk, we aim to highlight issues important in developing and maintaining successful ABI programs that would be in the best interests of patients. Especially with pediatric patients, the ultimate benefits of implantation will be known only after years of growth and development. These benefits have yet to be fully elucidated and continue to be an area of controversy. The limited number of publications in this area were reviewed. Review of the current literature was performed. Disease processes, risk/benefit analyses, degrees of evidence, and U.S. Food and Drug Administration approvals differ among various categories of patients in whom auditory brainstem implantation could be considered for use. We suggest sets of criteria necessary for the development of successful and sustaining ABI programs, including programs for NF2 patients, postlingually deafened adult nonneurofibromatosis type 2 patients, and congenitally deaf pediatric patients. Laryngoscope, 127:1909-1915, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Focuses on "organizers," tools or techniques that provide identification and classification along with possible relationships or connections among ideas, concepts, and issues. Discusses David Ausubel's research and ideas concerning advance organizers; the implications of Ausubel's theory to curriculum and teaching; "webbing," a…
Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako
The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys with autistic spectrum disorder (nine with and nine without auditory hypersensitivity) and 12 age-matched controls. The group with hypersensitivity showed significantly more delayed M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
Auditory perception or hearing can be defined as the interpretation of sensory evidence, produced by the ears in response to sound, in terms of the events that caused the sound. We do not hear a window but we may hear a window closing. We do not hear a dog but we may hear a dog barking. And we do not hear a person but we may hear a person talking. Hearing impairment can result in anxiety or stress in everyday life. Pure-tone hearing loss (or threshold shift) is a measure of hearing impairment. Aging and excessive noise are the main causes of hearing impairment. Speech perception is a distinct concept; its difference from hearing is best illustrated by the disabled individual declaring "I can hear that someone is talking to me, but I don't understand what she says". Being unable to understand significant others easily and clearly, especially when listening to speech in a noisy environment, can give rise to considerable psychosocial and professional consequences (disability). Presbycusis is the decline in hearing sensitivity caused by the aging process at different levels of the auditory system. However, it is difficult to isolate age effects from other contributors to age-related hearing loss such as noise damage, genetic susceptibility, inflammatory otologic disorders, and ototoxic agents. Therefore, presbycusis and age-related hearing loss are often used synonymously. In this report, pathophysiology is mostly described with regard to presbycusis, and the main peripheral types of presbycusis (sensory or Corti organ-related, strial, and neural) are summarized. An original experimental model of strial presbycusis, based on chronic application of furosemide at the round window, is further described. Central presbycusis is mainly determined by degeneration secondary to peripheral impairment (the concept of deafferentation). Central auditory changes typically affect speed of processing and result in poorer speech understanding in noise or with rapid or degraded speech. Last
Vinish Agarwal; Saurabh Varshney; Sampan Singh Bist; Sanjiv Bhagat; Sarita Mishra; Vivek Jha
Auditory neuropathy (AN)/auditory dyssynchrony (AD) is a very often missed diagnosis, hence an underdiagnosed condition in clinical practice. Auditory neuropathy is a condition in which patients, on audiologic evaluation, are found to have normal outer hair cell function and abnormal neural function at the level of the eighth nerve. These patients, on clinical testing, are found to have normal otoacoustic emissions, whereas auditory brainstem response audiometry reveals the absence of neural ...
Mizrahi, Adi; Shalev, Amos; Nelken, Israel
The auditory system drives behavior using information extracted from sounds. Early in the auditory hierarchy, circuits are highly specialized for detecting basic sound features. However, already at the level of the auditory cortex the functional organization of the circuits and the underlying coding principles become different. Here, we review some recent progress in our understanding of single neuron and population coding in primary auditory cortex, focusing on natural sounds. We discuss possible mechanisms explaining why single neuron responses to simple sounds cannot predict responses to natural stimuli. We describe recent work suggesting that structural features like local subnetworks rather than smoothly mapped tonotopy are essential components of population coding. Finally, we suggest a synthesis of how single neurons and subnetworks may be involved in coding natural sounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
David L Woods
BACKGROUND: While human auditory cortex is known to contain tonotopically organized auditory cortical fields (ACFs), little is known about how processing in these fields is modulated by other acoustic features or by attention. METHODOLOGY/PRINCIPAL FINDINGS: We used functional magnetic resonance imaging (fMRI) and population-based cortical surface analysis to characterize the tonotopic organization of human auditory cortex and analyze the influence of tone intensity, ear of delivery, scanner background noise, and intermodal selective attention on auditory cortex activations. Medial auditory cortex surrounding Heschl's gyrus showed large sensory (unattended) activations with two mirror-symmetric tonotopic fields similar to those observed in non-human primates. Sensory responses in medial regions had symmetrical distributions with respect to the left and right hemispheres, were enlarged for tones of increased intensity, and were enhanced when sparse image acquisition reduced scanner acoustic noise. Spatial distribution analysis suggested that changes in tone intensity shifted activation within isofrequency bands. Activations to monaural tones were enhanced over the hemisphere contralateral to stimulation, where they produced activations similar to those produced by binaural sounds. Lateral regions of auditory cortex showed small sensory responses that were larger in the right than left hemisphere, lacked tonotopic organization, and were uninfluenced by acoustic parameters. Sensory responses in both medial and lateral auditory cortex decreased in magnitude throughout stimulus blocks. Attention-related modulations (ARMs) were larger in lateral than medial regions of auditory cortex and appeared to arise primarily in belt and parabelt auditory fields. ARMs lacked tonotopic organization, were unaffected by acoustic parameters, and had distributions that were distinct from those of sensory responses. Unlike the gradual adaptation seen for sensory responses
Kottler, Sylvia B.
Procedures and sample activities are provided for both identifying and training children with auditory perception problems related to sound localization, sound discrimination, and sound sequencing. (KW)
.... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...
Ian T. Zajac
This study examined whether the broad ability general speediness (Gs) could be measured via the auditory modality. Existing and purpose-developed auditory tasks that maintained the cognitive requirements of established visually presented Gs markers were completed by 96 university undergraduates. Exploratory and confirmatory factor analyses showed that the auditory tasks combined with established visual measures to define latent Gs and reaction time factors. These findings provide preliminary evidence that if auditory tasks are developed that maintain the same cognitive requirements as existing visual measures, they are likely to index similar cognitive processes.
Pillai, Roshni; Yathiraj, Asha
The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
Hatch, Mary Jo
Most of us recognize that organizations are everywhere. You meet them on every street corner in the form of families and shops, study in them, work for them, buy from them, pay taxes to them. But have you given much thought to where they came from, what they are today, and what they might become… and considers many more. Mary Jo Hatch introduces the concept of organizations by presenting definitions and ideas drawn from a variety of subject areas including the physical sciences, economics, sociology, psychology, anthropology, literature, and the visual and performing arts. Drawing on examples from… prehistory and everyday life, from the animal kingdom as well as from business, government, and other formal organizations, Hatch provides a lively and thought-provoking introduction to the process of organization…
Ellen de Wit
Presentation at the CPLOL congress, Florence. In this systematic review, six electronic databases were searched for peer-reviewed studies using the key words auditory processing, auditory diseases, central [Mesh], and auditory perceptual. Two reviewers independently assessed relevant studies by inclusion
.... This fundamental process of auditory perception is called auditory scene analysis. Of particular importance in auditory scene analysis is the separation of speech from interfering sounds, or speech segregation...
Daliri, Ayoub; Max, Ludo
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre
Stebbings, Kevin A; Lesicko, Alexandria M H; Llano, Daniel A
We live in a world imbued with a rich mixture of complex sounds. Successful acoustic communication requires the ability to extract meaning from those sounds, even when degraded. One strategy used by the auditory system is to harness high-level contextual cues to modulate the perception of incoming sounds. An ideal substrate for this process is the massive set of top-down projections emanating from virtually every level of the auditory system. In this review, we provide a molecular and circuit-level description of one of the largest of these pathways: the auditory corticocollicular pathway. While its functional role remains to be fully elucidated, activation of this projection system can rapidly and profoundly change the tuning of neurons in the inferior colliculus. Several specific issues are reviewed. First, we describe the complex heterogeneous anatomical organization of the corticocollicular pathway, with particular emphasis on the topography of the pathway. We also review the laminar origin of the corticocollicular projection and discuss known physiological and morphological differences between subsets of corticocollicular cells. Finally, we discuss recent findings about the molecular micro-organization of the inferior colliculus and how it interfaces with corticocollicular termination patterns. Given the assortment of molecular tools now available to the investigator, it is hoped that this review will help guide future research on the role of this pathway in normal hearing. Copyright © 2014 Elsevier B.V. All rights reserved.
Lee, Christopher S; Todd, Neil P McAngus
The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language acquisition and processing. Several questions remain, however, as to what exactly characterizes the rhythmic differences, how they are manifested at an auditory/acoustic level and how listeners, whether adult native speakers or young infants, process rhythmic information. In this paper it is proposed that the crucial determinant of rhythmic organization is the variability in the auditory prominence of phonetic events. In order to test this auditory prominence hypothesis, an auditory model is run on two multi-language data-sets, the first consisting of matched pairs of English and French sentences, and the second consisting of French, Italian, English and Dutch sentences. The model is based on a theory of the auditory primal sketch, and generates a primitive representation of an acoustic signal (the rhythmogram) which yields a crude segmentation of the speech signal and assigns prominence values to the obtained sequence of events. Its performance is compared with that of several recently proposed phonetic measures of vocalic and consonantal variability.
Tallal, Paula; Gaab, Nadine
Children with language-learning impairments (LLI) form a heterogeneous population with the majority having both spoken and written language deficits as well as sensorimotor deficits, specifically those related to dynamic processing. Research has focused on whether or not sensorimotor deficits, specifically auditory spectrotemporal processing deficits, cause phonological deficit, leading to language and reading impairments. New trends aimed at resolving this question include prospective longitudinal studies of genetically at-risk infants, electrophysiological and neuroimaging studies, and studies aimed at evaluating the effects of auditory training (including musical training) on brain organization for language. Better understanding of the origins of developmental LLI will advance our understanding of the neurobiological mechanisms underlying individual differences in language development and lead to more effective educational and intervention strategies. This review is part of the INMED/TINS special issue "Nature and nurture in brain development and neurological disorders", based on presentations at the annual INMED/TINS symposium (http://inmednet.com/).
I am pleased to present the 15th International Conference on Auditory Display (ICAD), which takes place in Copenhagen, Denmark, May 18-21, 2009. The ICAD 2009 theme is Timeless Sound, including the universal aspect of sounds as well as the influence of time in the perception of sounds. ICAD 2009… with the re-new festival. The conference addresses all aspects related to the design of sounds, either conceptual or technical. Besides topics traditionally addressed by ICAD, I would like to take the opportunity of ICAD being organized by re-new to highlight the ICAD 2009 theme Timeless Sound…, and the possibilities of a full week of artistic presentations, including installations, concerts and much more. The joint organisation of CMMR with ICAD offers a great opportunity to discuss the links between auditory display, sound modeling and music information retrieval. ...
Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.
Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (mean age = 53.19 years)…
Keilmann, A; Läßig, A K; Nospes, S
The definition of an auditory processing disorder (APD) is based on impairments of auditory functions. APDs are disturbances in processes central to hearing that cannot be explained by comorbidities such as attention deficit or language comprehension disorders. Symptoms include difficulties in differentiation and identification of changes in time, structure, frequency and intensity of sounds; problems with sound localization and lateralization, as well as poor speech comprehension in adverse listening environments and dichotic situations. According to the German definition of APD (as opposed to central auditory processing disorder, CAPD), peripheral hearing loss or cognitive impairment also exclude APD. The diagnostic methodology comprises auditory function tests and the required diagnosis of exclusion. APD is diagnosed if a patient's performance is two standard deviations below the normal mean in at least two areas of auditory processing. The treatment approach for an APD depends on the patient's particular deficits. Training, compensatory strategies and improvement of the listening conditions can all be effective.
Maier, Joost X; Ghazanfar, Asif A
Looming signals (signals that indicate the rapid approach of objects) are behaviorally relevant signals for all animals. Accordingly, studies in primates (including humans) reveal attentional biases for detecting and responding to looming versus receding signals in both the auditory and visual domains. We investigated the neural representation of these dynamic signals in the lateral belt auditory cortex of rhesus monkeys. By recording local field potential and multiunit spiking activity while the subjects were presented with auditory looming and receding signals, we show here that auditory cortical activity was biased in magnitude toward looming versus receding stimuli. This directional preference was not attributable to the absolute intensity of the sounds nor can it be attributed to simple adaptation, because white noise stimuli with identical amplitude envelopes did not elicit the same pattern of responses. This asymmetrical representation of looming versus receding sounds in the lateral belt auditory cortex suggests that it is an important node in the neural network correlate of looming perception.
Okulate, G T; Jones, O B E
Although auditory hallucinations are universal phenomena, they show cultural and ethnic variation. We set out to study some differences between auditory hallucinations in Nigerian patients and their foreign counterparts. We also investigated the usefulness of auditory hallucinations in distinguishing between schizophrenia and affective disorders. A semi-structured interview was used to obtain information from 89 patients with auditory hallucinations who met ICD-10 criteria for either schizophrenia or affective psychoses and 10 others with organic mental disorders. Responses were compared with respect to the frequency, form and content of the hallucinatory voices as well as the languages spoken. In this sample, voices speaking exclusively in a foreign language were uncommon. Voices commanding and those discussing patients in the third person were the commonest in schizophrenic patients but not as frequent as in a similar group of patients in the UK studied by other authors. In patients with schizophrenia, voices were more likely to discuss the patient, whereas in affective disorders, voices were more likely to evoke fear, and patients were more likely to carry out commands. In conclusion, only three features of auditory hallucinations distinguished between schizophrenic and affective psychoses patients. Auditory hallucinations may be less harassing in Nigerian schizophrenic patients than in their UK counterparts. These hallucinations are most often perceived in the individual's mother tongue, with or without additional use of English, even when the patients have been 'westernized' through education and religion.
Beebe, Nichole L; Schofield, Brett R
Perineuronal nets (PNs) are aggregates of extracellular matrix molecules that surround some neurons in the brain. While PNs occur widely across many cortical areas, subcortical PNs are especially associated with motor and auditory systems. The auditory system has recently been suggested as an ideal model system for studying PNs and their functions. However, descriptions of PNs in subcortical auditory areas vary, and it is unclear whether the variation reflects species differences or differences in staining techniques. Here, we used two staining techniques (one lectin stain and one antibody stain) to examine PN distribution in the subcortical auditory system of four different species: guinea pigs (Cavia porcellus), mice (Mus musculus, CBA/CaJ strain), Long-Evans rats (Rattus norvegicus), and naked mole-rats (Heterocephalus glaber). We found that some auditory nuclei exhibit dramatic differences in PN distribution among species while other nuclei have consistent PN distributions. We also found that PNs exhibit molecular heterogeneity, and can stain with either marker individually or with both. PNs within a given nucleus can be heterogeneous or homogenous in their staining patterns. We compared PN staining across the frequency axes of tonotopically organized nuclei and among species with different hearing ranges. PNs were distributed non-uniformly across some nuclei, but only rarely did this appear related to the tonotopic axis. PNs were prominent in all four species; we found no systematic relationship between the hearing range and the number, staining patterns or distribution of PNs in the auditory nuclei. © 2017 Wiley Periodicals, Inc.
Mittal, Rahul; Debs, Luca H; Nguyen, Desiree; Patel, Amit P; Grati, M'hamed; Mittal, Jeenu; Yan, Denise; Eshraghi, Adrien A; Liu, Xue Zhong
The ear is a sensitive organ involved in hearing and balance. The complex signaling network in the auditory system plays a crucial role in maintaining normal physiological function of the ear. The inner ear comprises a variety of host signaling pathways working in synergy to deliver clear sensory messages. Any disruption, as minor as it can be, has the potential to affect this finely tuned system with temporary or permanent sequelae including vestibular deficits and hearing loss. Mutations linked to auditory symptoms, whether inherited or acquired, are being actively researched for ways to reverse, silence, or suppress them. In this article, we discuss recent advancements in understanding the pathways involved in auditory system signaling, from hair cell development through transmission to cortical centers. Our review discusses Notch and Wnt signaling, cell to cell communication through connexin and pannexin channels, and the detrimental effects of reactive oxygen species on the auditory system. There has been an increased interest in the auditory community to explore the signaling system in the ear for hair cell regeneration. Understanding signaling pathways in the auditory system will pave the way for novel avenues to regenerate sensory hair cells and restore hearing function. J. Cell. Physiol. 232: 2710-2721, 2017. © 2016 Wiley Periodicals, Inc.
Tatiane Faria Barrozo
INTRODUCTION: Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. OBJECTIVE: To study phonological measures and (central) auditory processing of children with speech sound disorder. METHODS: Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to their (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. RESULTS: The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. CONCLUSION: The comparison among the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective to indicate the need for evaluation in children with speech sound disorder.
Lim, Hubert H.; Lenarz, Minoo; Lenarz, Thomas
The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain, with the best performer still not able to achieve open set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception, along with several feasible solutions for improving AMI implementation, are presented. PMID:19762428
Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David
Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
Jill B Firszt
Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type
Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa
A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.
Reser, D H; Fishman, Y I; Arezzo, J C; Steinschneider, M
The functional organization of primary auditory cortex in non-primates is generally modeled as a tonotopic gradient with an orthogonal representation of independently mapped binaural interaction columns along the isofrequency contours. Little information is available regarding the validity of this model in the primate brain, despite the importance of binaural cues for sound localization and auditory scene analysis. Binaural and monaural responses of A1 to pure tone stimulation were studied using auditory evoked potentials, current source density and multiunit activity. Key findings include: (i) differential distribution of binaural responses with respect to best frequency, such that 74% of the sites exhibiting binaural summation had best frequencies below 2000 Hz; (ii) the pattern of binaural responses was variable with respect to cortical depth, with binaural summation often observed in the supragranular laminae of sites showing binaural suppression in thalamorecipient laminae; and (iii) dissociation of binaural responses between the initial and sustained action potential firing of neuronal ensembles in A1. These data support earlier findings regarding the temporal and spatial complexity of responses in A1 in the awake state, and are inconsistent with a simple orthogonal arrangement of binaural interaction columns and best frequency in A1 of the awake primate.
Otophysine fishes have a series of bones, the Weberian ossicles, which acoustically couple the swimbladder to the inner ear. These fishes have evolved a diversity of sound-generating organs and acoustic signals, although some species, such as the goldfish, are not known to be vocal. Utilizing a recently developed auditory brainstem response (ABR)-recording technique, the auditory sensitivities of representatives of seven families from all four otophysine orders were investigated and compared to the spectral content of their vocalizations. All species examined detect tone bursts from 100 Hz to 5 kHz, but ABR-audiograms revealed major differences in auditory sensitivities, especially at higher frequencies (>1 kHz) where thresholds differed by up to 50 dB. These differences showed no apparent correspondence to the ability to produce sounds (vocal versus non-vocal species) or to the spectral content of species-specific sounds. All fishes have maximum sensitivity between 400 Hz and 1,500 Hz, whereas the major portion of the energy of acoustic signals was in the frequency range of 100-400 Hz (swimbladder drumming sounds) and of 1-3 kHz (stridulatory sounds). Species producing stridulatory sounds exhibited better high-frequency hearing sensitivity (pimelodids, doradids), except for callichthyids, which had poorest hearing ability in this range. Furthermore, fishes emitting both low- and high-frequency sounds, such as pimelodid and doradid catfishes, did not possess two corresponding auditory sensitivity maxima. Based on these results it is concluded that the selective pressures involved in the evolution of the Weberian apparatus and the design of vocal signals in otophysines were primarily predator or prey detection in quiet freshwater habitats, rather than optimization of acoustic communication.
von Jonquieres, Georg; Froud, Kristina E; Klugmann, Claudia B; Wong, Ann C Y; Housley, Gary D; Klugmann, Matthias
Canavan Disease (CD) is a leukodystrophy caused by homozygous null mutations in the gene encoding aspartoacylase (ASPA). ASPA-deficiency is characterized by severe psychomotor retardation, and excessive levels of the ASPA substrate N-acetylaspartate (NAA). ASPA is an oligodendrocyte marker and it is believed that CD has a central etiology. However, ASPA is also expressed by Schwann cells and ASPA-deficiency in the periphery might therefore contribute to the complex CD pathology. In this study, we assessed peripheral and central auditory function in the AspalacZ/lacZ rodent model of CD using auditory brainstem response (ABR). Increased ABR thresholds and the virtual loss of waveform peaks 4 and 5 in AspalacZ/lacZ mice indicated altered central auditory processing in mutant mice compared with Aspawt/wt controls. Analysis of ABR latencies recorded from AspalacZ/lacZ mice revealed that the speed of nerve conduction was unchanged in the peripheral part of the auditory pathway but impaired in the CNS. Histological analyses confirmed that ASPA was expressed in oligodendrocytes and Schwann cells of the auditory system. In keeping with our physiological results, the cellular organization of the cochlea, including the organ of Corti, was preserved and the spiral ganglion nerve fibres were normal in ASPA-deficient mice. In contrast, we detected substantial hypomyelination in the central auditory system of AspalacZ/lacZ mice. In summary, our data suggest that the lack of ASPA in the CNS is responsible for the observed hearing deficits, while ASPA-deficiency in the cochlear nerve fibres is tolerated both morphologically and functionally.
Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory, moving from posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus, and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, may be strengthened by long-term chess playing because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The test was administered to 30 expert chess players aged 20-35 years and 30 non-chess players who were matched on relevant conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. The mean scores of the dichotic auditory-verbal memory test differed significantly between expert chess players and non-chess players (p ≤ 0.001). The difference between the ears' scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that enhanced auditory memory function is related to the strengthening of cognitive performance through long-term chess playing.
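The group comparison above relies on an independent samples t-test with pooled variance (the SPSS default for equal variances). As an illustration only, with made-up scores rather than the study's data, the statistic can be computed as:

```python
import math

def independent_t(a, b):
    """Student's independent-samples t statistic with pooled variance,
    plus the degrees of freedom, for comparing two groups' means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2

# Hypothetical memory-test scores, for illustration only
chess = [14, 15, 16, 17, 18]
controls = [12, 13, 14, 15, 16]
t, df = independent_t(chess, controls)  # t = 2.0, df = 8
```

The resulting t would then be compared against the t distribution with `df` degrees of freedom to obtain the p-value (e.g. via `scipy.stats.ttest_ind` in practice).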
A driving simulator was used to compare the effectiveness of increasing intensity (looming) auditory warning signals with other types of auditory warnings. Auditory warnings have been shown to speed driver reaction time in rear-end collision situations; however, it is not clear which type of signal is the most effective. Although verbal and symbolic (e.g., a car horn) warnings have faster response times than abstract warnings, they often lead to more response errors. Participants (N=20) experienced four nonlooming auditory warnings (constant intensity, pulsed, ramped, and car horn), three looming auditory warnings ("veridical," "early," and "late"), and a no-warning condition. In 80% of the trials, warnings were activated when a critical response was required, and in 20% of the trials, the warnings were false alarms. For the early (late) looming warnings, the rate of change of intensity signaled a time to collision (TTC) that was shorter (longer) than the actual TTC. Veridical looming and car horn warnings had significantly faster brake reaction times (BRT) compared with the other nonlooming warnings (by 80 to 160 ms). However, the number of braking responses in false alarm conditions was significantly greater for the car horn. BRT increased significantly and systematically as the TTC signaled by the looming warning was changed from early to veridical to late. Looming auditory warnings produce the best combination of response speed and accuracy. The results indicate that looming auditory warnings can be used to effectively warn a driver about an impending collision.
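The looming warnings described above signal time to collision (TTC) through the rate of intensity change: for a source approaching at constant speed, amplitude at the ear grows roughly as the inverse of the remaining distance, i.e. proportional to 1/(TTC − t). A minimal sketch of such an amplitude envelope (parameter values are illustrative, not those used in the study):

```python
def looming_envelope(ttc, duration, n, a0=1.0):
    """Amplitude envelope of a looming warning: a(t) = a0 * ttc / (ttc - t),
    the 1/distance growth of a source approaching at constant speed.
    `ttc` is the signaled time to collision; requires duration < ttc."""
    dt = duration / (n - 1)
    return [a0 * ttc / (ttc - i * dt) for i in range(n)]

# A 4 s "veridical" ramp signaling a 5 s time to collision
env = looming_envelope(ttc=5.0, duration=4.0, n=81)
# The envelope rises monotonically and doubles once half the TTC has elapsed
```

An "early" or "late" warning would be generated the same way, but with a signaled `ttc` shorter or longer than the actual time to collision, making the ramp steeper or shallower than veridical.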
Hou, Yanlian; Xiao, Xiaoyan; Ren, Jianmin; Wang, Yajuan; Zhao, Faming
More attention has recently been focused on auditory impairment in young type 1 diabetics. This study aimed to evaluate the auditory function of young type 1 diabetics and the correlation between clinical indexes and hearing impairment. We evaluated the auditory function of 50 type 1 diabetics and 50 healthy subjects. Clinical indexes were measured and their relation to auditory function was analyzed. Type 1 diabetic patients demonstrated a deficit, with significantly elevated thresholds in both ears compared with healthy controls. ABR latencies at the right ear (wave V and interwave I-V) and at the left ear (waves III and V and interwaves I-III and I-V) were significantly prolonged in the diabetic group compared with control subjects (p < 0.01). Type 1 diabetics exhibited higher auditory thresholds, slower auditory conduction times, and cochlear impairment. HDL-cholesterol, diabetes duration, systemic blood pressure, microalbuminuria, GHbA1C, triglyceride, and age may affect the auditory function of type 1 diabetics. Copyright © 2015 IMSS. Published by Elsevier Inc. All rights reserved.
Gutschalk, Alexander; Dykstra, Andrew R
Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Ying, Jun; Zhou, Dan; Lin, Ke; Gao, Xiaorong
The auditory steady-state response (ASSR) may reflect activity from different regions of the brain. In particular, the gamma-band ASSR has been reported to play an important role in working memory, speech understanding, and recognition. Traditionally, the ASSR has been determined by power spectral density analysis, which cannot detect the exact overall distributed properties of the ASSR. Functional network analysis has recently been applied in electroencephalography studies. Previous studies on resting or working state found a small-world organization of the brain network. Some researchers have studied dysfunctional networks caused by diseases. The present study investigates the brain connection networks of schizophrenia patients with auditory hallucinations during an ASSR task. A directed transfer function is utilized to estimate the brain connectivity patterns. Moreover, the structures of brain networks are analyzed by converting the connectivity matrices into graphs. It is found that for normal subjects, network connections are mainly distributed at the central and frontal-temporal regions. This indicates that the central regions act as transmission hubs of information under ASSR stimulation. For patients, network connections seem unordered. The finding that the path length was larger in patients than in normal subjects under most thresholds provides insight into the structures of connectivity patterns. The results suggest that there are more synchronous oscillations that cover a long distance on the cortex but a less efficient network for patients with auditory hallucinations.
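The path-length comparison above can be made concrete: after thresholding a connectivity matrix into a directed binary graph, the characteristic path length is the mean of finite shortest-path distances over all ordered node pairs. A minimal sketch under that reading (the directed transfer function estimation itself is omitted):

```python
from collections import deque

def characteristic_path_length(adj):
    """Mean shortest-path length over all ordered pairs of nodes in a
    directed binary graph, given as an adjacency matrix (adj[i][j] = 1
    for an edge i -> j). Unreachable pairs are ignored."""
    n = len(adj)
    total, pairs = 0, 0
    for src in range(n):
        # breadth-first search from src over directed edges
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            if v != src:
                total += d
                pairs += 1
    return total / pairs

# Directed ring 0 -> 1 -> 2 -> 3 -> 0: distances 1, 2, 3 from every node
ring = [[0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
# characteristic_path_length(ring) == 2.0
```

A larger value under a given threshold indicates longer average routes between regions, i.e. a less efficient network, which is the direction of the patient-control difference reported here.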
Plakke, Bethany; Romanski, Lizabeth M.
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Francis T. Pleban
Full Text Available A review study was conducted to examine the adverse effects of styrene, styrene mixtures, and styrene and/or styrene mixtures combined with noise on the auditory system in humans employed in occupational settings. The search included peer-reviewed articles published in English language involving human volunteers spanning a 25-year period (1990-2015). Studies included peer review journals, case-control studies, and case reports. Animal studies were excluded. An initial search identified 40 studies. After screening for inclusion, 13 studies were retrieved for full journal detail examination and review. As a whole, the results range from no to mild associations between styrene exposure and auditory dysfunction, noting relatively small sample sizes. However, four studies investigating styrene with other organic solvent mixtures and noise suggested that combined exposure to styrene and organic solvent mixtures may be more ototoxic than exposure to noise alone. There is little literature examining the effect of styrene on auditory functioning in humans. Nonetheless, findings suggest public health professionals and policy makers should be made aware of the future research needs pertaining to hearing impairment and ototoxicity from styrene. It is recommended that chronic styrene-exposed individuals be routinely evaluated with a comprehensive audiological test battery to detect early signs of auditory dysfunction. Keywords: auditory system, human exposure, ototoxicity, styrene
Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
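A toy version of the competition circuit described above, mutual inhibition between two percept units plus slow firing-rate adaptation, already produces spontaneous alternations in dominance. All parameter values here are illustrative, and the noise, NMDA-mediated memory, and A1-like pulsatile inputs of the authors' model are omitted:

```python
import math

def simulate(T=3000.0, dt=0.1, I=0.5, beta=1.0, g=0.6, tau_a=100.0):
    """Two rate units with mutual inhibition (strength beta) and slow
    adaptation (strength g, time constant tau_a >> 1). The dominant
    unit adapts until the suppressed unit escapes, so dominance
    alternates; returns the number of dominance switches."""
    F = lambda x: 1.0 / (1.0 + math.exp(-20.0 * (x - 0.2)))  # steep gain
    r1, r2, a1, a2 = 0.9, 0.0, 0.0, 0.0
    winners = []
    for _ in range(int(T / dt)):
        dr1 = -r1 + F(I - beta * r2 - g * a1)
        dr2 = -r2 + F(I - beta * r1 - g * a2)
        a1 += dt * (r1 - a1) / tau_a  # slow adaptation tracks firing
        a2 += dt * (r2 - a2) / tau_a
        r1 += dt * dr1
        r2 += dt * dr2
        winners.append(1 if r1 > r2 else 2)
    return sum(1 for i in range(1, len(winners))
               if winners[i] != winners[i - 1])

switches = simulate()  # several alternations over the run
```

In the full model each unit's input would additionally depend on the A-B frequency difference, which is how stimulus manipulations shift the balance of dominance durations between the two percepts.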
Atagi, Eriko; Bent, Tessa
Through experience with speech variability, listeners build categories of indexical speech characteristics including categories for talker, gender, and dialect. The auditory free classification task, in which listeners freely group talkers based on audio samples, has been a useful tool for examining listeners' representations of some of these characteristics including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech. The category structure and salient perceptual dimensions of nonnative speech were investigated from two perspectives: general similarity and perceived native language background. Talker intelligibility and whether native talkers were included were manipulated to test stimulus set effects. Results showed that degree of accent was a highly salient feature of nonnative speech for classification based on general similarity and on perceived native language background. This salience, however, was attenuated when listeners were listening to highly intelligible stimuli and attending to the talkers' native language backgrounds. These results suggest that the context in which nonnative speech stimuli are presented, such as the listeners' attention to the talkers' native language and the variability of stimulus intelligibility, can influence listeners' perceptual organization of nonnative speech.
Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.
Lotfi, Yones; Moosavi, Abdollah; Abdollahi, Farzaneh Zamiri; BAKHSHI, Enayatollah; Sadjedi, Hamed
Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is an auditory ability to detect sound sources in space and can help to differentiate the desired speech from other simultaneous sound sources. Aim o...
Moore, David R.; Halliday, Lorna F.; Amitay, Sygal
This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...
Ghuntla Tejas P.; Mehta Hemant B.; Gokhale Pradnya A.; Shah Chinmay J.
Reaction is a purposeful voluntary response to a stimulus, such as a visual or auditory stimulus. Auditory reaction time is the time required to respond to an auditory stimulus. Quickness of response is very important in games like basketball. This study was conducted to compare the auditory reaction time of basketball players and healthy controls. Auditory reaction time was measured with a reaction-time instrument in healthy controls and basketball players. Simple reaction time and choice reaction time...
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. Material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed: registration of N2, P2 and P300 waves, and a psychoacoustic test of central auditory functions, the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; psychoacoustic tests were again performed and P300 cortical potentials were recorded. Statistical analyses were then performed. The analyses revealed that application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived ambiguous visual apparent motion stimuli (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.
Brian N Pasley
Full Text Available How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
... to the inner row of hair cells or synapses between the inner hair cells and the auditory ... any other nerve-related problems. Ongoing speech and language testing . A child with ANSD needs regular visits ...
The present research proposes that the presence of auditory feedback increases satisfaction with the shopping experience, confidence in the retailer, and the likelihood to return to the retailer...
Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...
Daalman, K.; Diederen, K. M. J.; Derks, E. M.; van Lutterveld, R.; Kahn, R. S.; Sommer, Iris E. C.
Background. Hallucinations have consistently been associated with traumatic experiences during childhood. This association appears strongest between physical and sexual abuse and auditory verbal hallucinations (AVH). It remains unclear whether traumatic experiences mainly colour the content of AVH
Full Text Available Age-related hearing loss, or presbycusis, is a complex phenomenon consisting of elevation of hearing levels as well as changes in auditory processing. It is commonly classified into four categories depending on the cause. Auditory brainstem responses (ABRs) are a type of early evoked potential recorded within the first 10 ms of stimulation. They represent the synchronized activity of the auditory nerve and the brainstem. Some of the changes that occur in the aging auditory system may significantly influence the interpretation of the ABRs in comparison with the ABRs of young adults. The waves of ABRs are described in terms of amplitude, latencies and interpeak latencies of the different waves. There is a tendency for the amplitude to decrease and the absolute latencies to increase with advancing age, but these trends are not always clear due to an increase in threshold with advancing age, which acts as a major confounding factor in the interpretation of ABRs.
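The ABR metrics this abstract describes (absolute latencies and interpeak latencies) are simple differences of peak times. A minimal sketch, assuming hypothetical wave labels (I, III, V) and purely illustrative millisecond values:

```python
# Hedged sketch: interpeak latencies from absolute ABR peak latencies.
# The wave labels and millisecond values are illustrative, not data
# from the study summarized above.

def interpeak_latencies(peaks_ms):
    """Return I-III, III-V and I-V interpeak latencies (ms) from a dict
    of absolute peak latencies, e.g. {"I": 1.6, "III": 3.7, "V": 5.6}."""
    return {
        "I-III": peaks_ms["III"] - peaks_ms["I"],
        "III-V": peaks_ms["V"] - peaks_ms["III"],
        "I-V": peaks_ms["V"] - peaks_ms["I"],
    }

example = {"I": 1.6, "III": 3.7, "V": 5.6}  # illustrative values in ms
print(interpeak_latencies(example))
```

Elevated interpeak latencies (rather than absolute latencies alone) are what isolate conduction time between generators from peripheral threshold effects.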
Vitor E. Valenti
Full Text Available Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation between noise intensity and vagal-sympathetic balance. Additionally, it was reported that music therapy improved heart rate variability in anthracycline-treated breast cancer patients. It was hypothesized that dopamine release in the striatal system induced by pleasurable songs is involved in cardiac autonomic regulation. Musical auditory stimulation influences heart rate variability through a neural mechanism that is not well understood. Further studies are necessary to develop new therapies to treat cardiovascular disorders.
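The heart rate variability discussed above is typically quantified with time-domain indices such as SDNN and RMSSD. A minimal sketch, assuming a hypothetical series of RR intervals in milliseconds (the values are illustrative, not from any study cited here):

```python
# Hedged sketch: two standard time-domain HRV indices computed from a
# hypothetical RR-interval series (ms).
import statistics

def sdnn(rr_ms):
    """Standard deviation of the RR (NN) intervals."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

rr = [812, 790, 804, 830, 818, 795]  # illustrative RR intervals in ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

Higher values of both indices generally indicate greater vagally mediated variability, which is why they serve as outcome measures in studies of musical auditory stimulation.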
Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems. Numerous studies have investigated them, and the auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the "auditory" and "omega-3" keywords and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, can be influenced by the strengthening processes that follow long-term chess playing, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert...
Zatorre, Robert J; Halpern, Andrea R
Most people intuitively understand what it means to "hear a tune in your head." Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.
Ozdemir, Süleyman; Kıroğlu, Mete; Tuncer, Ulkü; Sahin, Rasim; Tarkan, Ozgür; Sürmelioğlu, Ozgür
The aim of this study was to analyze the development of auditory performance in cochlear-implanted patients. The effects of age at implantation, gender, implanted ear and model of the cochlear implant on the patients' auditory performance were investigated. Twenty-eight patients (12 boys, 16 girls) with congenital prelingual hearing loss who underwent cochlear implant surgery at our clinic and had a follow-up of at least 18 months were selected for the study. Listening Progress Profile (LiP), Monosyllable-Trochee-Polysyllable (MTP) and Meaningful Auditory Integration Scale (MAIS) tests were performed to analyze the auditory performance of the patients. To determine the effect of the age at implantation on auditory performance, patients were assigned to two groups: group 1 (implantation age ≤60 months, mean 44.8 months) and group 2 (implantation age >60 months, mean 100.6 months). Group 2 had higher preoperative test scores than group 1, but after cochlear implant use, the auditory performance levels of the patients in group 1 improved faster and equalized to those of the patients in group 2 after 12-18 months. Our data showed that variables such as sex, implanted ear or model of the cochlear implant did not have any statistically significant effect on the auditory performance of the patients after cochlear implantation. We found a negative correlation between implantation age and auditory performance improvement in our study. We observed that children implanted at a young age had quicker language development and more success in reading, writing and other educational skills later on.
Scherberich, Jan; Hummel, Jennifer; Schöneich, Stefan; Nowotny, Manuela
From mammals to insects, acoustic communication is in many species crucial for successful reproduction. In the duetting bushcricket Ancylecha fenestrata, the mutual acoustic communication between males and females is asymmetrical. We investigated how those signalling disparities are reflected by sexual dimorphism of their ears. Both sexes have tympanic ears in their forelegs, but male ears possess a significantly longer crista acustica containing 35% more scolopidia. With more sensory cells to cover a similar hearing range, the male hearing organ shows a significantly expanded auditory fovea that is tuned to the dominant frequency of the female reply to facilitate phonotactic mate finding. This sex-specific auditory fovea is demonstrated in the mechanical and neuronal responses along the tonotopically organized crista acustica by laservibrometric and electrophysiological frequency mapping, respectively. Morphometric analysis of the crista acustica revealed an interrupted gradient in organ height solely within this auditory fovea region, whereas all other anatomical parameters decrease continuously from proximal to distal. Combining behavioural, anatomical, biomechanical and neurophysiological information, we demonstrate evidence of a pronounced auditory fovea as a sex-specific adaptation of an insect hearing organ for intraspecific acoustic communication. © 2017 The Author(s).
Karina S Cramer
Full Text Available Glial cells, previously thought to have generally supporting roles in the central nervous system, are emerging as essential contributors to multiple aspects of neuronal circuit function and development. This review focuses on the contributions of glial cells to the development of specialized auditory pathways in the brainstem. These pathways display specialized synapses and an unusually high degree of precision in circuitry that enables sound source localization. The development of these pathways thus requires highly coordinated molecular and cellular mechanisms. Several classes of glial cells, including astrocytes, oligodendrocytes, and microglia, have now been explored in these circuits in both avian and mammalian brainstems. Distinct populations of astrocytes are found over the course of auditory brainstem maturation. Early appearing astrocytes are associated with spatial compartments in the avian auditory brainstem. Factors from late appearing astrocytes promote synaptogenesis and dendritic maturation, and astrocytes remain integral parts of specialized auditory synapses. Oligodendrocytes play a unique role in both birds and mammals in highly regulated myelination essential for proper timing to decipher interaural cues. Microglia arise early in brainstem development and may contribute to maturation of auditory pathways. Together these studies demonstrate the importance of non-neuronal cells in the assembly of specialized auditory brainstem circuits.
Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya
The auditory sensory pathway has been studied in meditators using midlatency and short-latency auditory evoked potentials. The present study evaluated long-latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts. They were (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation. The results suggest that meditation facilitates the processing of information in the auditory association cortex, whereas the number of neurons recruited was smaller in random thinking and non-meditative focused thinking, at the level of the secondary auditory cortex, auditory association cortex and anterior cingulate cortex. © EEG and Clinical Neuroscience Society (ECNS) 2014.
Ali Akbar Tahaei
Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; their abnormal timing may underlie their disfluency.
Emine Merve Kaya
Full Text Available Bottom-up attention is a sensory-driven selection mechanism that directs perception towards a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models, whereby local or global contrast is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention; providing the first examination of the space of auditory saliency spanning pitch, intensity and timbre; and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space, and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes.
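The core idea the abstract describes, flagging deviations from background statistics as salient, can be illustrated with a toy running z-score detector. This is not the authors' model; the feature values, warm-up length and threshold are assumptions for illustration only:

```python
# Hedged sketch: flag frames of a 1-D feature track (e.g. per-frame
# intensity) whose value deviates from the running background statistics
# by more than `threshold` standard deviations. Uses Welford's online
# mean/variance update; stats exclude the frame being tested.

def salient_frames(feature, threshold=3.0, warmup=5):
    """Return indices of frames flagged as salient."""
    flagged = []
    mean, m2, n = 0.0, 0.0, 0
    for i, x in enumerate(feature):
        if n >= warmup:
            std = (m2 / (n - 1)) ** 0.5
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)
        # Welford's online update of running mean and sum of squares
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return flagged

track = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0, 1.0, 1.1]
print(salient_frames(track))  # only the outlying frame is flagged
```

A multidimensional version of this idea, one tracker per feature (pitch, intensity, timbre) with interactions between them, is closer in spirit to the model the abstract sketches.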
Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
Cosma, I.; Popescu, D. I.
For the sense of hearing, mechanoreceptors fire action potentials when their membranes are physically stretched. Based on statistical physics, we analyzed the entropic aspects of the auditory processes of hearing. We develop a model that connects the logarithm of the relative intensity of sound (loudness) to the level of energy disorder within the cellular sensory system. The increase of entropy and disorder in the system is connected to the free energy available to signal the production of action potentials in the inner hair cells of the vestibulocochlear auditory organ.
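The logarithmic intensity relation the abstract builds on is the familiar decibel scale, where level grows with the logarithm of intensity relative to a reference. A minimal sketch (the reference intensity is the conventional threshold of hearing; the example intensities are illustrative):

```python
# Hedged sketch of the logarithmic loudness relation: sound intensity
# level in dB relative to the conventional hearing threshold I0.
import math

I0 = 1e-12  # reference intensity, W/m^2 (threshold of hearing)

def level_db(intensity):
    """Sound intensity level in dB relative to I0."""
    return 10.0 * math.log10(intensity / I0)

# A tenfold increase in intensity adds a constant step in level,
# regardless of the starting intensity:
print(level_db(1e-6))                   # ~60 dB
print(level_db(1e-5) - level_db(1e-6))  # ~10 dB step
```

That constant additive step for each multiplicative change in intensity is the compressive mapping that an entropy-based account of loudness must reproduce.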
Hur, Joon Ho; Kim, Jae Kyun; Seo, Gi Young; Choi, Woo Sun; Byun, Jun Soo; Lee, Woong Jae; Lee, Tae Jin [Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul (Korea, Republic of); Kim, Na Ra [Dept. of Radiology, Samsung Medical Center, Sungkyunkwan University College of Medicine, Seoul (Korea, Republic of)
Juvenile xanthogranuloma (JXG) is a benign, spontaneously regressing lesion that usually occurs during the first year of life, but may also occur in adulthood. Although the most common presentation of JXG is the cutaneous lesion, it can also manifest in various visceral organs. JXG of the external auditory canal is extremely rare, and there have been only a few reports of those cases in the English literature. In this study, we present a case of pathologically proven JXG that occurred in the external auditory canal with a symptomatic clinical presentation.
Stekelenburg, J.J.; Vroomen, J.
The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Castillo, E; Carricondo, F; Bartolomé, M V; Vicente-Torres, A; Poch Broto, J; Gil-Loyzaga, P
Presbycusis is a progressive hearing impairment associated with aging, characterized by hearing loss and degeneration of cochlear structures. In this paper we analyze the effects of aging on the auditory system of C57/BL6J mice with electrophysiological and morphological studies. To this end, the auditory potentials of mice aged 1, 3, 6, 9, 12, 15, 18, 21 and 24 months were recorded, and the morphology of the cochlea was then analyzed. Auditory potentials revealed an increase in wave latencies, as well as a decrease in their amplitudes, during aging. Morphological results showed total degeneration of the organ of Corti, which was replaced by a flat epithelial layer, with a total absence of hair cells.
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
King, A J
The experiments described in this review have demonstrated that the SC contains a two-dimensional map of auditory space, which is synthesized within the brain using a combination of monaural and binaural localization cues. There is also an adaptive fusion of auditory and visual space in this midbrain nucleus, providing for a common access to the motor pathways that control orientation behaviour. This necessitates a highly plastic relationship between the visual and auditory systems, both during postnatal development and in adult life. Because of the independent mobility of different sense organs, gating mechanisms are incorporated into the auditory representation to provide up-to-date information about the spatial orientation of the eyes and ears. The SC therefore provides a valuable model system for studying a number of important issues in brain function, including the neural coding of sound location, the co-ordination of spatial information between different sensory systems, and the integration of sensory signals with motor outputs.
Aline Albuquerque Morais
Full Text Available Auditory training (AT) has been used for auditory rehabilitation in elderly individuals and is an effective tool for optimizing speech processing in this population. However, it is necessary to distinguish training-related improvements from placebo and test-retest effects. Thus, we investigated the efficacy of short-term auditory training (acoustically controlled auditory training, ACAT) in elderly subjects through behavioral measures and P300. Sixteen elderly individuals with APD received an initial evaluation (evaluation 1, E1) consisting of behavioral and electrophysiological tests (P300 evoked by tone bursts and speech sounds) to evaluate their auditory processing. The individuals were divided into two groups. The Active Control Group [ACG (n=8)] underwent placebo training. The Passive Control Group [PCG (n=8)] did not receive any intervention. After 12 weeks, the subjects were reevaluated (evaluation 2, E2). Then, all of the subjects underwent ACAT. Following another 12 weeks (8 training sessions), they underwent the final evaluation (evaluation 3, E3). There was no significant difference between E1 and E2 in the behavioral tests [F(9,6)=0.6, p=0.92, Wilks' λ=0.65] or P300 [F(8,7)=2.11, p=0.17, Wilks' λ=0.29], ruling out placebo and test-retest effects. A significant improvement was observed between the pre- and post-ACAT conditions (E2 and E3) for all auditory skills according to the behavioral methods [F(4,27)=0.18, p=0.94, Wilks' λ=0.97]. However, the same result was not observed for P300 in any condition. There was no significant difference between P300 stimuli. The ACAT improved the behavioral performance of the elderly for all auditory skills and was an effective method for hearing rehabilitation.
Full Text Available Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
Full Text Available What do listeners know about sounds that have a systematic organization? Research suggests that listeners store absolute pitch information as part of their representations for specific auditory experiences. It is unclear, however, whether such knowledge is abstracted beyond these experiences. In two studies we examined this question via a tone adjustment task in which listeners heard one of several target tones to be matched by adjusting the frequency of a subsequent starting tone. In the first experiment listeners estimated tones from one of three distributions differing in frequency range. The effect of tone matching in the three different distributions was then modeled using randomly generated data (RGD) to ascertain the degree to which individuals' estimates are affected by generalized note knowledge. Results showed that while listeners' estimates were similar to the RGD, indicating a central tendency effect reflective of the target tone distribution, listeners were more accurate than the RGD, indicating that their estimates were affected by generalized note knowledge. The second experiment tested three groups of listeners who vary in the nature of their note knowledge. Specifically, absolute pitch (AP) possessors, non-AP listeners matched in musical expertise (ME), and non-AP musical novices (MN) adjusted tones from a micro-scale that included only two in-tune notes (B4 and C5). While tone estimates for all groups showed a central tendency effect reflective of the target tone distribution, each group's estimates were more accurate than the RGD, indicating all listeners' estimates were guided by generalized note knowledge. Further, there was evidence that explicit note knowledge additionally influenced AP possessors' tone estimates, as tones closer to C5 had less error. Results indicate that everyday listeners possess generalized note knowledge that influences the perception of isolated tones and that this effect is made more evident with
Murphy, Cristina F B; Pagan-Neves, Luciana O; Wertzner, Haydée F; Schochat, Eliane
This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder (SSD). A total of 17 children, aged 7-12 years, with SSD were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production, and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g., backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g., phonological organization and awareness). Both interventions consisted of twelve 45-min sessions delivered twice per week, for a total of 9 h. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement in phonological skills was observed in either group. Inter-group analysis demonstrated significant differences in improvement following training between the groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and in both auditory measures. Both analyses therefore suggest that although the non-linguistic auditory intervention appeared to be the more effective approach, it was not sufficient to enhance phonological skills.
Brown, Rachel M.; Palmer, Caroline
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of
Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.
Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei
The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive, bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows symptoms of decline in age-related cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, and aging- and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of peripheral auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of anti-senescence approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions. © 2013 Japan Geriatrics Society.
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples
Although it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection under a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge can also improve auditory detection when listeners have to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences between the stimuli.
Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2, and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with the Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after 2 and 3 mg/kg BW caffeine consumption. Wave I latency decreased significantly after 3 mg/kg BW caffeine consumption (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.
The prevalence of acquired hearing loss is very high. About 10% of the total population, and more than one third of the population over 65 years, suffer from debilitating hearing loss. The most common type of hearing loss in adults is idiopathic sudden sensorineural hearing loss (ISSHL). In the majority of cases, ISSHL is permanent and typically associated with loss of sensory hair cells in the organ of Corti. Following the loss of sensory hair cells, the auditory neurons undergo secondary degeneration. Sensory hair cells and auditory neurons do not regenerate throughout life, and loss of these cells is irreversible and cumulative. However, recent advances in stem cell biology have raised hope that stem cell therapy is coming closer to regenerating sensory hair cells in humans. A major advance in the prospects for the use of stem cells to restore normal hearing comes with the recent discovery that hair cells can be generated ex vivo from embryonic stem (ES) cells, adult inner ear stem cells, and neural stem cells. Furthermore, there is increasing evidence that stem cells can promote damaged-cell repair in part by secreting diffusible molecules such as growth factors. These results suggest that stem-cell-based treatment regimens may be applicable to the damaged inner ear in future clinical applications. Previously, we established an animal model of cochlear ischemia in gerbils and showed progressive hair cell loss up to 4 days after ischemia. Auditory brainstem response (ABR) recordings have demonstrated that this gerbil model displays severe deafness just after cochlear ischemia and gradually recovers thereafter. These pathological findings and clinical manifestations are reminiscent of ISSHL in humans. In this study, we show the effectiveness of stem cell therapy using this animal model of ISSHL.
Frizzo, Ana Claudia Figueiredo
Introduction: This is an objective laboratory assessment of the central auditory systems of children with learning disabilities. Aim: To examine and determine the properties of the components of the Auditory Middle Latency Response in a sample of children with learning disabilities. Methods: This was a prospective, cross-sectional cohort study with quantitative, descriptive, and exploratory outcomes. We included 50 children aged 8-13 years of both genders, with and without learning disorders. Those with disorders of known organic, environmental, or genetic causes were excluded. Results and Conclusions: The Na, Pa, and Nb waves were identified in all subjects. The ranges of the latency component values were as follows: Na = 9.8-32.3 ms, Pa = 19.0-51.4 ms, and Nb = 30.0-64.3 ms (learning disorders group); and Na = 13.2-29.6 ms, Pa = 21.8-42.8 ms, and Nb = 28.4-65.8 ms (healthy group). The values of the Na-Pa amplitude ranged from 0.3 to 6.8 μV (learning disorders group) or 0.2-3.6 μV (healthy group). Upon analysis, the functional characteristics of the groups were distinct: the left-hemisphere Nb latency was longer in the study group than in the control group. Peculiarities of the electrophysiological measures were observed in the children with learning disorders. This study has provided information on the Auditory Middle Latency Response and can serve as a reference for other clinical and experimental studies in children with these disorders.
Elbert, Sarah P.; Dijkstra, Arie
Persuasive health messages can be presented through an auditory channel, thereby enhancing the salience of the source, making it fundamentally different from written or pictorial information. We focused on the determinants of perceived source reliability in auditory health persuasion by
Alves, Renato V; Brandão, Fabiano H; Aquino, José E P; Carvalho, Maria R M S; Giancoli, Suzana M; Younes, Eduado A P
Intradermal nevi are common benign pigmented skin tumors. Their occurrence within the external auditory canal is uncommon. The clinical and pathologic features of an intradermal nevus arising within the external auditory canal are presented, and the literature is reviewed.
Elbert, Sarah; Dijkstra, Arie
In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood
Christiansen, Simon Krogholt
The ability to perceptually segregate concurrent sound sources and focus one's attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent…
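The temporal-coherence cue referred to in this thesis abstract can be illustrated with a toy computation: channels whose envelopes rise and fall together are grouped into one stream, while channels with anti-phase envelopes (as in an alternating tone sequence) are candidates for segregation. The sketch below illustrates only that cue with a simple correlation measure; it is not the thesis model, and all names are illustrative.

```python
import numpy as np

def envelope_coherence(env_a, env_b):
    """Pearson correlation between two channel envelopes: a simple
    stand-in for the temporal-coherence cue."""
    return float(np.corrcoef(env_a, env_b)[0, 1])

fs = 1000                                                # envelope sampling rate, Hz
t = np.arange(fs) / fs                                   # 1 s of time
pulses = (np.sin(2 * np.pi * 4 * t) > 0).astype(float)   # 4 Hz pulse train

same_rhythm = pulses.copy()              # synchronous channel: coherent
alternating = np.roll(pulses, fs // 8)   # shifted half a period: anti-phase

# Coherent channels correlate strongly (grouped into one stream);
# alternating channels anti-correlate (a cue for two streams).
```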
Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik
Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low frequencies. Unfortunately, data at low frequencies are scarce, and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results from previous studies suggests the filter bandwidth keeps decreasing below 100 Hz, although at a relatively lower rate than at higher frequencies. Main characteristics of the auditory filter were studied from below 100 Hz up to 1000 Hz. Center frequencies evaluated were 50, 63, 125, 250, 500, and 1000 Hz. The notched-noise method was used, with the noise masker at 40 dB spectral density. A rounded exponential auditory filter model (roex(p,r)) was used to fit the masking data. Preliminary data on 1 subject are discussed. Considering the system as a whole (e.g. without removing the assumed middle…
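For readers unfamiliar with the roex(p, r) model named above, its standard weighting function is W(g) = (1 − r)(1 + pg)e^(−pg) + r, where g is the deviation from the center frequency normalized by that frequency, p sets the passband slope, and r a dynamic-range floor. The sketch below is a minimal illustration of that formula; the parameter values are illustrative, not the study's fitted values.

```python
import math

def roex_weight(g, p, r):
    """roex(p, r) auditory filter weight: W(g) = (1 - r)(1 + p*g)*exp(-p*g) + r."""
    return (1.0 - r) * (1.0 + p * g) * math.exp(-p * g) + r

def roex_response(f_hz, fc_hz, p, r):
    """Filter response at frequency f for a filter centered at fc."""
    g = abs(f_hz - fc_hz) / fc_hz   # normalized frequency deviation
    return roex_weight(g, p, r)

# For a symmetric roex(p) filter the equivalent rectangular bandwidth
# is ERB = 4 * fc / p, so a narrower filter (smaller ERB relative to fc)
# corresponds to a larger slope parameter p.
```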
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
Introduction The auditory system of HIV-positive children may have deficits at various levels, such as the high incidence of middle ear problems that can cause hearing loss. Objective The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing as verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. On the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. On the SAPT, the memory test for verbal sounds showed the most errors (53.33%), whereas on the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background-noise conditions in both age groups, and most errors occurred in the left ear in the group of 8-year-olds, with similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV, and its comorbidity with several biological and environmental factors, indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV; and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible, to minimize the damage caused by auditory deficits.
Shiller, Douglas M.; Rochon, Marie-Lyne
Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it...
Favrot, Sylvain Emmanuel
A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows a full control of the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, which is state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software has been developed…
Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio
The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.
Auditory stream segregation is an important paradigm in the study of auditory scene analysis. Performance-based measures of auditory stream segregation have received increasing use as a complement to subjective reports of streaming. For example, the sensitivity in discriminating a temporal shift imposed on one B tone in an ABA sequence consisting of A and B tones that differ in frequency is often used to infer the perceptual organization (one stream vs. two streams). Limitations of these measures are discussed here, and an alternative measure based on the combination of decision weights and sensitivity is suggested. In the experiment, for ABA and ABB sequences varying in tempo (fast/slow) and duration (long/short), the sensitivity (d') in the temporal shift discrimination task did not differ between fast and slow sequences, despite strong differences in perceptual organization. The decision weights assigned to within-stream and between-stream interonset intervals also deviated from the idealized pattern of near-exclusive reliance on between-stream information in the subjectively integrated case, and on within-stream information in the subjectively segregated case. However, an estimate of internal noise computed using a combination of the estimated decision weights and sensitivity differentiated between sequences that were predominantly perceived as integrated or segregated, with significantly higher internal noise estimates for the segregated case. Therefore, the method of using a combination of decision weights and sensitivity provides a measure of auditory stream segregation that overcomes some of the limitations of purely sensitivity-based measures.
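The sensitivity index d' used in the abstract above is computed from hit and false-alarm rates via the inverse CDF of the standard normal distribution. The following is a minimal sketch of that standard signal-detection formula, not code from the study:

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    z is the inverse CDF of the standard normal; both rates must lie
    strictly between 0 and 1, so extreme rates are usually corrected
    before use (e.g. with a 1/(2N) adjustment).
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Equal hit and false-alarm rates give d' = 0 (no sensitivity);
# 84% hits with 16% false alarms give d' close to 2.
```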
McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten
Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate ... sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance of statistically blurred sound textures presented at a fixed blur increased over those presented ... as a gradual blur. The results suggest that the correct identification of sound textures is influenced by the preceding blurred stimulus. These findings draw parallels to the recognition of blurred images.
Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher
Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing by investigating event-related potentials such as mismatch negativity (MMN) and P3a amplitude. However, findings on MMN in autism ... a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk for children with ASD to develop schizophrenia later in life. Autism Res 2017, 10: 1857–1865.
Blattner, Meera M.
In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.
Grimes, A M; Elks, M L; Grunberger, G; Pikus, A M
We studied three patients with adrenomyeloneuropathy. Complete audiologic assessment was obtained: two patients showed unimpaired peripheral hearing and one showed a mild high-frequency hearing loss. Auditory brain-stem responses were abnormal in both ears of all subjects, with one subject showing no response above wave I, and the other two having significant wave I to III and wave III to V interval prolongations. We concluded that auditory brain-stem response testing provides a simple, valid, reliable method for demonstrating neurologic abnormality in adrenomyeloneuropathy even prior to evidence of clinical signs.
Maculewicz, Justyna; Jylhä, Antti; Serafin, Stefania
We present an interactive auditory display for walking with sinusoidal tones or ecological, physically-based synthetic walking sounds. The feedback is either step-based or rhythmic, with constant or adaptive tempo. In a tempo-following experiment, we investigate different interaction modes ... and auditory feedback, based on the MSE between the target and performed tempo, and the stability of the latter. The results indicate that the MSE with ecological sounds is comparable to that with the sinusoidal tones, yet ecological sounds are considered more natural. Adaptive conditions result in stable ...
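The MSE-based evaluation mentioned above compares a target tempo against the tempo a walker actually produces. A minimal sketch, assuming step tempi in beats per minute (the function name and example values are hypothetical, not the study's data):

```python
def tempo_mse(target_bpm, performed_bpm):
    """Mean squared error between target and performed step tempi (BPM),
    averaged over successive steps."""
    assert len(target_bpm) == len(performed_bpm)
    return sum((t - p) ** 2 for t, p in zip(target_bpm, performed_bpm)) / len(target_bpm)

# Hypothetical trial: constant 120 BPM target, three measured steps
print(tempo_mse([120, 120, 120], [118, 122, 121]))  # → 3.0
```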
Bolhuis, J. J.; van Kampen, H. S.
The characteristics of auditory learning in filial imprinting in precocial birds are reviewed. Numerous studies have demonstrated that the addition of an auditory stimulus improves following of a visual stimulus. This paper evaluates whether there is genuine auditory imprinting, i.e. the formation
Lucker, Jay R.
Many children with problems learning in school can have educational deficits due to underlying auditory processing disorders (APD). These children can be identified as having auditory learning disabilities. Furthermore, auditory learning disability is identified as a specific learning disability (SLD) under the IDEA. Educators and…
21 CFR 874.1090 (2010): Medical Devices; Ear, Nose, and Throat Devices; Diagnostic Devices. § 874.1090 Auditory impedance tester. (a) Identification. An auditory impedance tester is a device that is intended to change the air pressure in the...
Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit
The present study focuses on examining the hypothesis that an auditory temporal perception deficit is a basic cause of reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since auditory perception involves a number of…
47 CFR Part 15: Definition of Part 15 Auditory Assistance Device. AGENCY: Federal Communications Commission. ACTION: Proposed rule. SUMMARY: This document proposes to amend the definition of "auditory... definition restricts the use of part 15 auditory assistance devices that operate in the 72.0-73.0 MHz, 74.6...
Bidelman, Gavin M.
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294
Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio
The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔFTONE, TONE condition) but also in the amplitude modulation rate ("AM cue": ΔFAM, AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔFAM and ΔFTONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between or sensitive to the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest a possibility that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Szalárdy, Orsolya; Bendixen, Alexandra; Böhm, Tamás M; Davies, Lucy A; Denham, Susan L; Winkler, István
While many studies have assessed the efficacy of similarity-based cues for auditory stream segregation, much less is known about whether and how the larger-scale structure of sound sequences support stream formation and the choice of sound organization. Two experiments investigated the effects of musical melody and rhythm on the segregation of two interleaved tone sequences. The two sets of tones fully overlapped in pitch range but differed from each other in interaural time and intensity. Unbeknownst to the listener, separately, each of the interleaved sequences was created from the notes of a different song. In different experimental conditions, the notes and/or their timing could either follow those of the songs or they could be scrambled or, in case of timing, set to be isochronous. Listeners were asked to continuously report whether they heard a single coherent sequence (integrated) or two concurrent streams (segregated). Although temporal overlap between tones from the two streams proved to be the strongest cue for stream segregation, significant effects of tonality and familiarity with the songs were also observed. These results suggest that the regular temporal patterns are utilized as cues in auditory stream segregation and that long-term memory is involved in this process.
Froemke, Robert C; Martins, Ana Raquel O
The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. Copyright © 2011 Elsevier B.V. All rights reserved.
Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali
To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only a few details of that sequence. PMID:27490251
Large, Edward W; Almonte, Felix V
Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition. © 2012 New York Academy of Sciences.
Nyrop, Mette; Grøntved, Aksel
OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system, applied to both squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000. M...
A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…
A method of reconstructing perceived or imagined music by analyzing brain activity has not yet been established. As a first step toward developing such a method, we aimed to reconstruct the imagery of rhythm, which is one element of music. It has been reported that a periodic electroencephalogram (EEG) response is elicited while a human imagines a binary or ternary meter on a musical beat. However, it is not clear whether or not brain activity synchronizes with a fully imagined beat and meter without auditory stimuli. To investigate neural entrainment to imagined rhythm during auditory imagery of beat and meter, we recorded EEG while nine participants (eight males and one female) imagined three types of rhythm without auditory stimuli but with visual timing, and then we analyzed the amplitude spectra of the EEG. We also recorded EEG while the participants only gazed at the visual timing as a control condition to confirm the visual effect. Furthermore, we derived features of the EEG using canonical correlation analysis (CCA) and conducted an experiment to individually classify the three types of imagined rhythm from the EEG. The results showed that classification accuracies exceeded the chance level in all participants. These results suggest that auditory imagery of meter elicits a periodic EEG response that changes at the imagined beat and meter frequency even in the fully imagined conditions. This study represents the first step toward the realization of a method for reconstructing imagined music from brain activity.
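The amplitude-spectrum analysis described above amounts to reading off spectral amplitude at the imagined beat and meter frequencies. A minimal sketch with NumPy, not the study's pipeline (the function name, sampling rate, and frequencies are illustrative assumptions; a binary-meter beat at 2 Hz would place the meter subdivision at 1 Hz, a ternary one at 2/3 Hz):

```python
import numpy as np

def amplitude_at(freqs_hz, signal, fs):
    """Amplitude of a single EEG channel at selected frequencies,
    read from the nearest bins of the discrete Fourier transform."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / n  # peak-amplitude scaling
    bins = np.fft.rfftfreq(n, d=1 / fs)
    return [spectrum[np.argmin(np.abs(bins - f))] for f in freqs_hz]

# Synthetic check: a pure 2 Hz sinusoid should show unit amplitude at the
# "beat" frequency and nothing at the ternary-meter frequency (2/3 Hz).
fs = 256
t = np.arange(0, 8, 1 / fs)          # 8 s of data → 0.125 Hz bin spacing
eeg = np.sin(2 * np.pi * 2 * t)
beat, meter = amplitude_at([2.0, 2 / 3], eeg, fs)  # beat ≈ 1.0, meter ≈ 0.0
```

Entrainment would then be assessed by comparing such amplitudes between imagery and control conditions.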
Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In
Norrix, Linda W.; Velenovsky, David S.
Purpose: Auditory neuropathy spectrum disorder, or ANSD, can be a confusing diagnosis to physicians, clinicians, those diagnosed, and parents of children diagnosed with the condition. The purpose of this review is to provide the reader with an understanding of the disorder, the limitations in current tools to determine site(s) of lesion, and…
Elbert, Sarah; Dijkstra, Arie
Persuasive health information can be presented through an auditory channel. Curiously enough, the effect of voice cues in health persuasion has hardly been studied. Research concerning visual persuasive messages showed that self-affirmation results in a more open-minded reaction to threatening
Tervaniemi, Mari; Hugdahl, Kenneth
In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. Main emphasis is given to hemodynamic and electromagnetic data of healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (called planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to informational sound content but to rapid temporal information, which is more common in speech than in music sounds. Finally, we obtain evidence for the vulnerability of the functional specialization of sound processing. These altered forms of lateralization may be caused by top-down and bottom-up effects inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding.
the target sound in time determine whether or not across-frequency modulation effects are observed. The results suggest that the binding of sound elements into coherent auditory objects precedes aspects of modulation analysis and imply a cortical locus involving integration times of several hundred...
Idiazábal-Aletxa, M A; Saperas-Rodríguez, M
Specific language impairment (SLI) is diagnosed when a child has difficulty in producing or understanding spoken language for no apparent reason. The diagnosis is made when language development is out of keeping with other aspects of development and possible explanatory causes have been excluded. In recent years, the neurosciences have turned to the study of SLI. The ability to process two or more rapidly presented, successive auditory stimuli is believed to underlie successful language acquisition. It has been proposed that SLI is the consequence of low-level abnormalities in auditory perception. Moreover, children with SLI show a specific deficit in the automatic discrimination of syllables. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. There is much controversy about the role of such deficits in causing their language problems, and it has been difficult to establish solid, replicable findings in this area because of the heterogeneity of the population and because insufficient attention has been paid to maturational aspects of auditory processing.
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer's disease (AD). We developed an auditory naming task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the auditory naming task. This task was also more difficult than two versions of a comparable visual naming task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal participants. Nonetheless, our auditory naming task may prove useful in research and clinical practice, especially with visually impaired patients.
Snyder, Joel S.; Carter, Olivia L.; Lee, Suh-Kyung; Hannon, Erin E.; Alain, Claude
The authors examined the effect of preceding context on auditory stream segregation. Low tones (A), high tones (B), and silences (-) were presented in an ABA-pattern. Participants indicated whether they perceived 1 or 2 streams of tones. The A tone frequency was fixed, and the B tone was the same as the A tone or had 1 of 3 higher frequencies.…
Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.
Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…
Schmidt-Kassow, Maren; Thöne, Katharina; Kaiser, Jochen
Recent studies have shown that moving in synchrony with auditory stimuli boosts attention allocation and verbal learning. Furthermore rhythmic tones are processed more efficiently than temporally random tones ('timing effect'), and this effect is increased when participants actively synchronize their motor performance with the rhythm of the tones, resulting in auditory-motor synchronization. Here, we investigated whether this applies also to sequences of linguistic stimuli (syllables). We compared temporally irregular syllable sequences with two temporally regular conditions where either the interval between syllable onsets (stimulus onset asynchrony, SOA) or the interval between the syllables' vowel onsets was kept constant. Entrainment to the stimulus presentation frequency (1 Hz) and event-related potentials were assessed in 24 adults who were instructed to detect pre-defined deviant syllables while they either pedaled or sat still on a stationary exercise bike. We found larger 1 Hz entrainment and P300 amplitudes for the SOA presentation during motor activity. Furthermore, the magnitude of the P300 component correlated with the motor variability in the SOA condition and 1 Hz entrainment, while in turn 1 Hz entrainment correlated with auditory-motor synchronization performance. These findings demonstrate that acute auditory-motor coupling facilitates phonetic encoding. Copyright © 2017 Elsevier B.V. All rights reserved.
Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich
Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system has progressed at a considerably slower pace compared to other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between fMRI technique and audition. A well known difficulty arises from the high intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli both from a perceptual point of view and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by the auditory research. The novel methodological approaches can make the fMRI exploration of auditory processing much easier and more reliable, and thus may permit filling the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated in the auditory gestalt are beginning to be understood.
Strait, Dana L.; Kraus, Nina
Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583
5p- (cri-du-chat) syndrome is a well-defined clinical entity presenting with phenotypic and cytogenetic variability. Despite recognition that abnormalities in audition are common, limited reports on auditory functioning in affected individuals are available. The current study presents a case illustrating the auditory functioning in a 22-month-old patient diagnosed with 5p- syndrome, karyotype 46,XX,del(5)(p13). Auditory neuropathy was diagnosed based on abnormal auditory evoked potentials with neural components suggesting severe to profound hearing loss in the presence of cochlear microphonic responses and behavioral reactions to sound at mild to moderate hearing levels. The current case and a review of available reports indicate that auditory neuropathy or neural dys-synchrony may be another phenotype of the condition, possibly related to abnormal expression of the protein beta-catenin mapped to 5p. Implications are for routine and diagnosis-specific assessments of auditory functioning and for employment of non-verbal communication methods in early intervention.
Wolak, Tomasz; Cieśla, Katarzyna; Lorens, Artur; Kochanek, Krzysztof; Lewandowska, Monika; Rusiniak, Mateusz; Pluta, Agnieszka; Wójcik, Joanna; Skarżyński, Henryk
enlargement and reduction of the extent of responses in the topically organized auditory cortex. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Hari M Bharadwaj
Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory...
Agnew, Z K; McGettigan, C; Banks, B; Scott, S K
Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for reading aloud compared with mouthing while listening. These data demonstrate an anterior-posterior division of superior temporal regions where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. Copyright © 2012 Elsevier Inc. All rights reserved.
Vibhakar C Kotak
The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation, and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory-driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the auditory cortex.
Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte
they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent…
Lehmann, Gerlind U C; Berger, Sandra; Strauss, Johannes; Lehmann, Arne W; Pflüger, Hans-Joachim
Reduction of tympanal hearing organs is repeatedly found amongst insects and is associated with weakened selection for hearing. There is also an associated wing reduction, since flight is no longer required to evade bats. Wing reduction may also affect sound production. Here, the auditory system in four silent grasshopper species belonging to the Podismini is investigated. In this group, tympanal ears occur but sound signalling does not. The tympanal organs range from fully developed to remarkably reduced tympana. To evaluate the effects of tympanal regression on neuronal organisation and auditory sensitivity, the size of wings and tympana, sensory thresholds and sensory central projections are compared. Reduced tympanal size correlates with a higher auditory threshold. The threshold curves of all four species are tuned to low frequencies with a maximal sensitivity at 3-5 kHz. Central projections of the tympanal nerve show characteristics known from fully tympanate acridid species, so neural elements for tympanal hearing have been strongly conserved across these species. The results also confirm the correlation between reduction in auditory sensitivity and wing reduction. It is concluded that the auditory sensitivity of all four species may be maintained by stabilising selective forces, such as predation.
McCormick, Catherine A; Gallagher, Shannon; Cantu-Hertzler, Evan; Woodrick, Scarlet
The nucleus medialis is the main first-order target of the mechanosensory lateral line (LL) system. This report definitively demonstrates that mechanosensory LL inputs also terminate in the ipsilateral dorsal portion of the descending octaval nucleus (dDO) in the goldfish. The dDO, which is the main first-order auditory nucleus in bony fishes, includes neurons that receive direct input from the otolithic end organs of the inner ear and project to the auditory midbrain. There are two groups of such auditory projection neurons: medial and lateral. The medial and the lateral groups in turn contain several neuronal populations, each of which includes one or more morphological cell types. In goldfish, the exclusively mechanosensory anterior and posterior LL nerves terminate only on specific cell types of auditory projection neurons in the lateral dDO group. Single neurons in the lateral dDO group may receive input from both anterior and posterior LL nerves. It is possible that some of the lateral dDO neurons that receive LL input also receive input from one or more of the otolithic end organs. These results are consistent with functional studies demonstrating low frequency acoustic sensitivity of the mechanosensory LL in teleosts, and they reveal that the anatomical substrate for sensory integration of otolithic and LL inputs is present at the origin of the central ascending auditory pathway in an otophysine fish. © 2016 S. Karger AG, Basel.
Anttonen, Tommi; Belevich, Ilya; Laos, Maarja
Wound healing in the inner ear sensory epithelia is performed by the apical domains of supporting cells (SCs). Junctional F-actin belts of SCs are thin during development but become exceptionally thick during maturation. The functional significance of the thick belts is not fully understood. We h...
Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.
Anttonen, Tommi; Belevich, Ilya; Laos, Maarja
by contact-independent diffusible signals. In the search for regulators of wound healing, we inactivated RhoA in SCs, which, however, did not limit wound healing. RhoA inactivation in developing outer hair cells (OHCs) caused myosin II delocalization from the perijunctional domain and apical cell-surface enlargement. These abnormalities led to the extrusion of OHCs from the epithelium. These results demonstrate the importance of stability of the apical domain, both in wound repair by SCs and in development of OHCs, and that only this latter function is regulated by RhoA. Because the correct cytoarchitecture…
Moerel, Michelle; De Martino, Federico; Santoro, Roberta; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple single-peaked spectral tuning (grouped into two clusters), we observe that ∼60% of auditory populations are sensitive to multiple frequency bands. Specifically, we observe sensitivity to multiple frequency bands (1) at exactly one octave distance from each other, (2) at multiple harmonically related frequency intervals, and (3) with no apparent relationship to each other. We propose that beyond the well known cortical tonotopic organization, multipeaked spectral tuning amplifies selected combinations of frequency bands. Such selective amplification might serve to detect behaviorally relevant and complex sound features, aid in segregating auditory scenes, and explain prominent perceptual phenomena such as octave invariance.
Sun, Pengfei; Qin, Jun; Campbell, Kathleen
Noise induced hearing loss (NIHL) remains a severe health problem worldwide. Existing noise metrics and models for evaluating NIHL are limited in predicting gradually developing NIHL (GDHL) caused by high-level occupational noise. In this study, we proposed two auditory fatigue based models, including equal velocity level (EVL) and complex velocity level (CVL), which combine the high-cycle fatigue theory with the mammalian auditory model, to predict GDHL. The mammalian auditory model is introduced by combining the transfer function of the external-middle ear and the triple-path nonlinear (TRNL) filter to obtain velocities of the basilar membrane (BM) in the cochlea. The high-cycle fatigue theory is based on the assumption that GDHL can be considered as a process of long-cycle mechanical fatigue failure of the organ of Corti. Furthermore, a series of chinchilla experimental data are used to validate the effectiveness of the proposed fatigue models. The regression analysis results show that both proposed fatigue models have high correlations with four hearing loss indices. This indicates that the proposed models can accurately predict hearing loss in chinchilla. Results suggest that the CVL model is more accurate than the EVL model in predicting the auditory risk of exposure to hazardous occupational noise.
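The high-cycle fatigue idea can be sketched with a Miner's-rule style linear damage accumulation over exposure cycles. Everything here is illustrative: the power-law S-N curve and its constants are hypothetical placeholders, not the fitted EVL/CVL models from the study.

```python
def cycles_to_failure(velocity_level_db, a=1e20, b=4.0):
    """Hypothetical S-N (stress vs. cycles-to-failure) curve: higher
    basilar-membrane velocity levels fail in fewer cycles. Constants
    a and b are illustrative, not fitted values from the study."""
    return a / (10 ** (velocity_level_db / 20)) ** b

def accumulated_damage(exposures):
    """Miner's-rule style linear damage accumulation over
    (velocity_level_db, n_cycles) exposures; accumulated damage
    approaching 1 would predict gradually developing hearing loss."""
    return sum(n / cycles_to_failure(level) for level, n in exposures)

# A louder exposure contributes far more damage per cycle.
d_quiet = accumulated_damage([(60.0, 1_000_000)])
d_loud = accumulated_damage([(90.0, 1_000_000)])
print(d_loud > d_quiet)  # → True
```

The design point is that fatigue damage grows as a steep power of the velocity level, so brief high-level exposures can dominate long low-level ones, which is the qualitative behavior a GDHL metric needs.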
Bonebright, T.L.; Caudell, T.P.; Goldsmith, T.E.; Miner, N.E.
This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
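Of the statistical techniques named in the framework, multidimensional scaling is the most algorithmic; a minimal sketch of classical (Torgerson) MDS in NumPy shows how pairwise dissimilarity ratings become a spatial map of stimuli. This is a generic textbook construction, not code from the paper.

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS: embed stimuli in n_dims dimensions
    so that Euclidean distances approximate the given dissimilarities."""
    d2 = np.asarray(dissim, dtype=float) ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ d2 @ j                      # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:n_dims] # keep the largest eigenvalues
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Toy example: three equally dissimilar sounds embed as an equilateral
# triangle whose pairwise distances match the input ratings.
d = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
coords = classical_mds(d)
print(round(float(np.linalg.norm(coords[0] - coords[1])), 3))  # → 1.0
```

In practice the input matrix would come from the similarity-rating or sorting tasks described above, and the resulting configuration is inspected for interpretable perceptual dimensions.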
The objective of this study was to determine whether intra-aural administration of aqueous solutions of marbofloxacin, gentamicin, tobramycin and ticarcillin (used off-licence) was associated with changes in hearing as measured by brainstem auditory evoked responses. Dogs diagnosed with otitis media (n=37) underwent brainstem auditory evoked response testing and then were treated for their ear disease. First, the external ear canal and middle ear were flushed with sterile saline followed by EDTA tris with 0·15% chlorhexidine. Then, a combination of aqueous antibiotic mixed with an aqueous solution of EDTA tris was instilled into the middle ear. Follow-up examinations were undertaken for each dog, and treatment was continued until there were no detected infectious organisms or inflammatory infiltrate. Brainstem auditory evoked response testing was repeated after resolution of the infection and discontinuation of therapy. Brainstem auditory evoked responses in dogs treated with aqueous solutions of marbofloxacin or gentamicin remained unchanged or improved after therapy of otitis media but were impaired in dogs treated with ticarcillin or tobramycin. If off-licence use of topical antibiotics is deemed necessary in cases of otitis media, aqueous solutions of marbofloxacin and gentamicin appear to be less ototoxic than aqueous solutions of ticarcillin or tobramycin. © 2017 British Small Animal Veterinary Association.
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; sajedi, Hamed
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits not accounted for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies showing that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to various different degrees. Stimuli with and without undersampling were equated for overall energy in order to focus on the changes that undersampling elicited on the stimulus waveforms, and not on its effects on the overall stimulus energy. Stochastic undersampling impaired the detection of short sounds (<50 ms), while the detection of longer sounds (>50 ms) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near) normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be 'slower than normal', as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support the probabilistic theories of detectability that challenge the idea that auditory threshold occurs by integration of sound energy over time.
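The stimulus manipulation described here (random sample loss with overall energy equated) can be sketched in a few lines. This is a plausible reading of the procedure, not the authors' exact implementation; the function name and the 30% survival probability are illustrative.

```python
import numpy as np

def undersample_equal_energy(waveform, survival_prob, rng):
    """Stochastic undersampling sketch: each sample survives with
    probability survival_prob (it is zeroed otherwise), then the result
    is rescaled to the original RMS so only the waveform shape changes,
    not the overall stimulus energy."""
    mask = rng.random(len(waveform)) < survival_prob
    sparse = waveform * mask
    rms_in = np.sqrt(np.mean(waveform ** 2))
    rms_out = np.sqrt(np.mean(sparse ** 2))
    return sparse * (rms_in / rms_out)

rng = np.random.default_rng(1)
noise = rng.standard_normal(10_000)                # broadband noise token
degraded = undersample_equal_energy(noise, 0.3, rng)
# Energy is equated even though ~70% of the samples were zeroed.
print(np.isclose(np.mean(noise ** 2), np.mean(degraded ** 2)))  # → True
```

Equating RMS is the step that isolates the waveform-shape consequences of deafferentation-like sample loss from a simple loss of level.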
Yager, D D
This paper provides an overview of insect peripheral auditory systems, focusing on tympanate ears (pressure detectors) and emphasizing research during the last 15 years. The theme throughout is the evolution of hearing in insects. Ears have appeared independently no fewer than 19 times in the class Insecta and are located on various thoracic and abdominal body segments, on legs, on wings, and on mouthparts. All have fundamentally similar structures (a tympanum backed by a tracheal sac and a tympanal chordotonal organ), though they vary widely in size, ancillary structures, and number of chordotonal sensilla. Novel ears have recently been discovered in praying mantids, two families of beetles, and two families of flies. The tachinid flies are especially notable because they use a previously unknown mechanism for sound localization. Developmental and comparative studies have identified the evolutionary precursors of the tympanal chordotonal organs in several insects; they are uniformly chordotonal proprioceptors. Tympanate species fall into clusters determined by which of the embryologically defined chordotonal organ groups in each body segment served as precursor for the tympanal organ. This suggests that the many appearances of hearing could arise from changes in a small number of developmental modules. The nature of those developmental changes that lead to a functional insect ear is not yet known. Copyright 1999 Wiley-Liss, Inc.
Kössl, Manfred; Coro, Frank; Seyfarth, Ernst-August; Nässig, Wolfgang A
Sensitive hearing organs often employ nonlinear mechanical sound processing which produces distortion-product otoacoustic emissions. Such emissions are also recorded from insect tympanal organs. Here we report high frequency distortion-product emissions, evoked by stimulus frequencies up to 95 kHz, from the tympanal organ of a notodontid moth, Ptilodon cucullina, which contains only a single auditory receptor neuron. The 2f1-f2 distortion-product emission reaches sound levels above 40 dB SPL. Most emission growth functions show a prominent notch of 20 dB depth (n = 20 trials), accompanied by an average phase shift of 119°, at stimulus levels between 60 and 70 dB SPL, which separates a low- and a high-level component. The emissions are vulnerable to topical application of ethyl ether, which shifts growth functions by about 20 dB towards higher stimulus levels. For the mammalian cochlea, Lukashkin and colleagues have proposed that distinct level-dependent components of nonlinear amplification do not necessarily require interaction of several cellular sources but could be due to a single nonlinear source. In notodontids, such a physiologically vulnerable source could be the single receptor cell. Potential contributions from accessory cells to the nonlinear properties of the scolopidial hearing organ are still unclear.
Hurley, L.M.; Hall, I.C.
Context-dependent plasticity in auditory processing is achieved in part by physiological mechanisms that link behavioral state to neural responses to sound. The neuromodulator serotonin has many characteristics suitable for such a role. Serotonergic neurons are extrinsic to the auditory system but send projections to most auditory regions. These projections release serotonin during particular behavioral contexts. Heightened levels of behavioral arousal and specific extrinsic events, including stressful or social events, increase serotonin availability in the auditory system. Although the release of serotonin is likely to be relatively diffuse, highly specific effects of serotonin on auditory neural circuitry are achieved through the localization of serotonergic projections, and through a large array of receptor types that are expressed by specific subsets of auditory neurons. Through this array, serotonin enacts plasticity in auditory processing in multiple ways. Serotonin changes the responses of auditory neurons to input through the alteration of intrinsic and synaptic properties, and alters both short- and long-term forms of plasticity. The infrastructure of the serotonergic system itself is also plastic, responding to age and cochlear trauma. These diverse findings support a view of serotonin as a widespread mechanism for behaviorally relevant plasticity in the regulation of auditory processing. This view also accommodates models of how the same regulatory mechanism can have pathological consequences for auditory processing. PMID:21187135
Black, David; Hansen, Christian; Nabavi, Arya; Kikinis, Ron; Hahn, Horst
This article investigates the current state of the art of the use of auditory display in image-guided medical interventions. Auditory display is a means of conveying information using sound, and we review the use of this approach to support navigated interventions. We discuss the benefits and drawbacks of published systems and outline directions for future investigation. We undertook a review of scientific articles on the topic of auditory rendering in image-guided intervention. This includes methods for avoidance of risk structures and instrument placement and manipulation. The review did not include auditory display for status monitoring, for instance in anesthesia. We identified 15 publications in the course of the search. Most of the literature (60%) investigates the use of auditory display to convey distance of a tracked instrument to an object using proximity or safety margins. The remainder discuss continuous guidance for navigated instrument placement. Four of the articles present clinical evaluations, 11 present laboratory evaluations, and 3 present informal evaluation (2 present both laboratory and clinical evaluations). Auditory display is a growing field that has been largely neglected in research in image-guided intervention. Despite benefits of auditory displays reported in both the reviewed literature and non-medical fields, adoption in medicine has been slow. Future challenges include increasing interdisciplinary cooperation with auditory display investigators to develop more meaningful auditory display designs and comprehensive evaluations which target the benefits and drawbacks of auditory display in image guidance.
Brishna Soraya Kamal
Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin
The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/,/s/,/f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks were investigated. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more in spectrally strong deviants compared to weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that longitudinal training of auditory-related cognitive mechanisms such as stimulus categorization, attention and memory updating is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
Kostopoulos, Penelope; Petrides, Michael
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top–down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience. PMID:26831102
Shoumaker, R D; Ajax, E T; Schenkenberg, T
The selective inability to comprehend the spoken word, in the absence of aphasia or defective hearing, is defined as pure word deafness (auditory verbal agnosia). Reported cases of this rare disorder have suggested the site of involvement to be strategically placed, interrupting fibers from the left and right primary auditory receptive areas that project to Wernicke's area in the dominant hemisphere. Our patient is a 44-year-old male who suffered from an uncertain illness complicated by fever, jaundice and generalized seizures seven years previously. Following an apparent convulsion, the patient was noted to be unable to understand spoken language, without loss of the ability to recognize and respond to sounds or marked impairment of speech or reading. The evidence suggested bilateral cerebral hemisphere disease more marked on the right. The abrupt onset without progression is consistent with a vascular or ischemic etiology. Conclusions about the nature of the lesion and the areas involved must await further studies and ultimately tissue examination.
Weinel, Jonathan; Cunningham, Stuart
In previous work the authors have proposed the concept of 'ASC Simulations': including audio-visual installations and experiences, as well as interactive video game systems, which simulate altered states of consciousness (ASCs) such as dreams and hallucinations. Building on the discussion of the authors' previous paper, where a large-scale qualitative study explored the changes to auditory perception that users of various intoxicating substances report, here the authors present three prototype audio mechanisms for simulating hallucinations in a video game. These were designed in the Unity video game engine as an early proof-of-concept. The first mechanism simulates 'selective auditory attention' to different sound sources, by attenuating the amplitude of unattended sources. The second simulates 'enhanced sounds', by adjusting perceived brightness through filtering. The third simulates...
Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., a probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with the increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.
Nielsen, Lars Bramsløw
An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...
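The loudness-growth behaviour that such models capture (recruitment: an elevated threshold forces steeper loudness growth toward a roughly unchanged uncomfortable level) can be illustrated with a toy calculation. This is a minimal sketch under that linear-in-dB assumption, not the model described in the study; all parameter values are assumed:

```python
def loudness_percent(level_db, threshold_db=0.0, ucl_db=100.0):
    """Toy recruitment rule: loudness rises from 0% at the listener's
    threshold to 100% at the uncomfortable level (ucl_db), which is
    assumed unchanged by cochlear hearing loss."""
    if level_db <= threshold_db:
        return 0.0
    return min(100.0, 100.0 * (level_db - threshold_db) / (ucl_db - threshold_db))

# A 50 dB tone is at mid loudness for a normal ear but inaudible to a
# listener with a 50 dB loss; at 100 dB the two ears converge.
normal = loudness_percent(50.0)
impaired = loudness_percent(50.0, threshold_db=50.0)
```

The steeper slope of the impaired-ear function between 50 and 100 dB is the recruitment effect the abstract refers to when noting that hearing-impaired loudness growth agreed qualitatively with recent findings.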
Christensen-Dalsgaard, Jakob; Tang, Ye Zhong; Carr, Catherine E
Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (around 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al., 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions...
Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O
In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets farther than subjects assigned to the anechoic chamber. Also, we found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We next performed a second experiment in which the same subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved the responses from the previous experiment provided they were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.
For humans and animals, the ability to discriminate speech and conspecific vocalizations is an important physiological assignment of the auditory system. To reveal the underlying neural mechanism, many electrophysiological studies have investigated the neural responses of the auditory cortex to conspecific vocalizations in monkeys. The data suggest that vocalizations may be hierarchically processed along an anterior/ventral stream from the primary auditory cortex (A1) to the ventral prefrontal cortex. To date, the organization of vocalization processing has not been well investigated in the auditory cortex of other mammals. In this study, we examined the spike activities of single neurons in two early auditory cortical regions with different anteroposterior locations, the anterior auditory field (AAF) and the posterior auditory field (PAF), in awake cats, as the animals were passively listening to forward and backward conspecific calls (meows) and human vowels. We found that the neural response patterns in PAF were more complex and had longer latency than those in AAF. The selectivity for different vocalizations based on the mean firing rate was low in both AAF and PAF, and not significantly different between them; however, more vocalization information was transmitted when the temporal response profiles were considered, and the maximum transmitted information by PAF neurons was higher than that by AAF neurons. Discrimination accuracy based on the activities of an ensemble of PAF neurons was also better than that of AAF neurons. Our results suggest that AAF and PAF are similar with regard to which vocalizations they represent but differ in the way they represent these vocalizations, and there may be a complex processing stream between them.
Albera, Roberto; Bin, Ilaria; Cena, Manuele; Dagna, Federico; Giordano, Pamela; Sammartano, Azia
Non-auditory effects of noise involve several systems and functions, the most important of which are the cardiovascular, vestibular, and psychological. Although several studies have correlated noise exposure with pathologies such as hypertension and anxiety disorders, and recent analyses carried out in the cavy (guinea pig) have explained part of their pathophysiology, their multiple causes and the variability of individual reactions remain important limits to their classification.
Whitehouse, Martha M.
The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrete, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.
Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku
The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. The resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears, separated into the higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced the scale illusion (ILL) and tones that mimicked the percept of the scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation caused by the illusion process. The activation difference between the two stimuli, measured at varied tempi of tone presentation in the superior temporal auditory cortex, was not explained by adaptation. Instead, excess activation for the ILL stimulus relative to the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation.
Priyank S Chatra
The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.
Karla Maria Ibraim da Freiria Elias
OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification of stimuli presented to the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages were found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.
Ernestus, Mirjam; Cutler, Anne
In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, together with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal
Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
Recent work on the mechanisms underlying auditory verbal hallucination (AVH) has been heavily informed by self-monitoring accounts that postulate defects in an internal monitoring mechanism as the basis of AVH. A more neglected alternative is an account focusing on defects in auditory processing, namely a spontaneous activation account of auditory activity underlying AVH. Science is often aided by putting theories in competition. Accordingly, a discussion that systematically contrasts the two models of AVH can generate sharper questions that will lead to new avenues of investigation. In this paper, we provide such a theoretical discussion of the two models, drawing strong contrasts between them. We identify a set of challenges for the self-monitoring account and argue that the spontaneous activation account has much in favor of it and should be the default account. Our theoretical overview leads to new questions and issues regarding the explanation of AVH as a subjective phenomenon and its neural basis. Accordingly, we suggest a set of experimental strategies to dissect the underlying mechanisms of AVH in light of the two competing models.
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, the planum temporale (PT), and the superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
Christensen-Dalsgaard, Jakob; Tang, Yezhong; Carr, Catherine E
Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼ 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200-500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional.
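The "almost three times larger than predicted" claim above can be checked with simple arithmetic: the maximum purely acoustic interaural delay is the interaural distance divided by the speed of sound. A minimal sketch, assuming a gecko interaural distance of roughly 3 cm and a sound speed of 343 m/s (both assumed values, not taken from the paper):

```python
SPEED_OF_SOUND = 343.0   # m/s in air (assumed)
HEAD_WIDTH = 0.03        # m, rough gecko interaural distance (assumed)

# Maximum acoustic delay for a sound arriving along the interaural axis.
predicted_itd_us = HEAD_WIDTH / SPEED_OF_SOUND * 1e6   # microseconds

measured_delay_us = 260.0  # median interaural delay reported in the abstract
ratio = measured_delay_us / predicted_itd_us
```

Under these assumptions the acoustic prediction comes out near 87 μs, so the measured 260 μs is about a factor of three larger, consistent with the suggestion that internal transmission through the mouth cavity boosts the effective delay.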
Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara
Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect.
Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience.
Plack, Christopher J; Oxenham, Andrew J; Kreft, Heather A; Carlyon, Robert P
Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.
Christopher J Plack
Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.
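The Huggins-pitch construction lends itself to a short signal-processing sketch. The following illustration (not the stimulus code from the study; the 600 Hz band centre, bandwidth, and 180° phase inversion are assumed choices for demonstration) gives one ear broadband noise and the other the same noise with the phase inverted in a narrow band, leaving the two channels physically identical everywhere else:

```python
import numpy as np

def huggins_pitch(fs=44100, dur=0.5, f0=600.0, bw=0.16, seed=0):
    """Dichotic Huggins-pitch sketch: left ear gets broadband noise,
    right ear the same noise with a 180-degree phase shift in a narrow
    band around f0 (fractional bandwidth bw), i.e. interaural
    decorrelation confined to that band."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs > f0 * (1 - bw / 2)) & (freqs < f0 * (1 + bw / 2))
    spec_r = spec.copy()
    spec_r[band] *= -1.0  # invert phase only inside the band
    left = np.fft.irfft(spec, n)
    right = np.fft.irfft(spec_r, n)
    return left, right
```

Because only the phase differs between the ears, each channel alone is indistinguishable noise; the tonal sensation at the band centre arises solely from binaural combination, which is why the percept must be generated centrally, at or beyond the superior olivary complex.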
Christensen-Dalsgaard, Jakob; Tang, Yezhong
Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200–500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional. PMID:21325679
Gottselig, J M; Hofer-Tinguely, G; Borbély, A A; Regel, S J; Landolt, H-P; Rétey, J V; Achermann, P
Sleep is superior to waking for promoting performance improvements between sessions of visual perceptual and motor learning tasks. Few studies have investigated possible effects of sleep on auditory learning. A key issue is whether sleep specifically promotes learning, or whether restful waking yields similar benefits. According to the "interference hypothesis," sleep facilitates learning because it prevents interference from ongoing sensory input, learning and other cognitive activities that normally occur during waking. We tested this hypothesis by comparing effects of sleep, busy waking (watching a film) and restful waking (lying in the dark) on auditory tone sequence learning. Consistent with recent findings for human language learning, we found that compared with busy waking, sleep between sessions of auditory tone sequence learning enhanced performance improvements. Restful waking provided similar benefits, as predicted based on the interference hypothesis. These findings indicate that physiological, behavioral and environmental conditions that accompany restful waking are sufficient to facilitate learning and may contribute to the facilitation of learning that occurs during sleep.
Cottrell, David; Campbell, Megan E J
When one hears footsteps in the hall, one is able to instantly recognise them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.
Paetau, R; Kajola, M; Korkman, M; Hämäläinen, M; Granström, M L; Hari, R
The Landau-Kleffner syndrome (LKS) is characterized by electroencephalographic spike discharges and verbal auditory agnosia in previously healthy children. We recorded magnetoencephalographic (MEG) spikes in a patient with LKS, and compared their sources with anatomical information from magnetic resonance imaging. All spikes originated close to the left auditory cortex. The evoked responses were contaminated by spikes in the left auditory area and suppressed in the right--the latter responses recovered when the spikes disappeared. We suggest that unilateral discharges at or near the auditory cortex disrupt auditory discrimination in the affected hemisphere, and lead to suppression of auditory information from the opposite hemisphere, thereby accounting for the two main criteria of LKS.
Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella
MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.
Carmona, C; Casado, I; Fernández-Rojas, J; Garín, J; Rayo, J I
Verbal auditory agnosia is rare in clinical practice. Clinically, it is characterized by impaired comprehension and repetition of speech, while reading, writing, and spontaneous speech are preserved. It is thus distinguished from generalized auditory agnosia by the preserved ability to recognize non-verbal sounds. We present the clinical picture of a forty-year-old, right-handed woman who developed verbal auditory agnosia after bilateral temporal ischemic infarcts due to atrial fibrillation caused by dilated cardiomyopathy. Neurophysiological studies with pure-tone threshold audiometry, brainstem auditory evoked potentials and cortical auditory evoked potentials showed sparing of peripheral hearing and an intact auditory pathway in the brainstem but impaired cortical responses. Cranial CT scan revealed two large hypodense areas involving the cortico-subcortical temporal lobes bilaterally. Cerebral SPECT using 99mTc-HMPAO as radiotracer showed hypoperfusion in both frontal lobes just posterior and next to Roland's fissure, and in both temporal lobes just anterior to the Sylvian fissure.
Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M
Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.
Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.
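Old/new recognition performance of the kind compared across modalities here is conventionally summarized with the signal-detection sensitivity index d′ (z-transformed hit rate minus z-transformed false-alarm rate). A minimal stdlib sketch, not the authors' actual analysis:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity for an old/new recognition test:
    d' = z(hit rate) - z(false-alarm rate).
    Extreme rates (0 or 1) are clipped with the common 1/(2N) rule
    so the inverse normal CDF stays finite."""
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_old, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    fa_rate = min(max(false_alarms / n_new, 1 / (2 * n_new)),
                  1 - 1 / (2 * n_new))
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, 80 hits / 20 misses against 20 false alarms / 80 correct rejections gives d′ ≈ 1.68, while chance performance gives 0.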
Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin
The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquiring meaning, with important lessons for other sensory systems as well. Copyright © 2014 the authors.
Kraus, Nina; Strait, Dana L; Parbery-Clark, Alexandra
Musicians benefit from real-life advantages, such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians' auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. © 2012 New York Academy of Sciences.
Chen, Joyce L; Penhune, Virginia B; Zatorre, Robert J
Much is known about the motor system and its role in simple movement execution. However, little is understood about the neural systems underlying auditory-motor integration in the context of musical rhythm, or the enhanced ability of musicians to execute precisely timed sequences. Using functional magnetic resonance imaging, we investigated how performance and neural activity were modulated as musicians and nonmusicians tapped in synchrony with progressively more complex and less metrically structured auditory rhythms. A functionally connected network was implicated in extracting higher-order features of a rhythm's temporal structure, with the dorsal premotor cortex mediating these auditory-motor interactions. In contrast to past studies, musicians recruited the prefrontal cortex to a greater degree than nonmusicians, whereas secondary motor regions were recruited to the same extent. We argue that the superior ability of musicians to deconstruct and organize a rhythm's temporal structure relates to the greater involvement of the prefrontal cortex mediating working memory.
Boscariol M.; Andre K.D.; Feniman M.R.
Many children with auditory processing disorders have a high prevalence of otitis media, a middle-ear alteration that is highly prevalent in children with cleft lip and palate. Aim: to assess the performance of children with isolated cleft palate (CP) in auditory processing tests. Prospective study. Materials and Methods: twenty children (7 to 11 years) with CP were submitted to tests of sound localization (SL), sequential memory for verbal sounds (MSSV) and for nonverbal sounds (MSSNV), and the Revised auditory fus...
Rivero, Olga; Sanjuan, Julio; Aguilar, Eduardo Jesús; Gonzalez, José Carlos; Molto, María Dolores; de Frutos, Rosa; Najera, Carmen
To study the role of the serotonin transporter gene (SLC6A4) in the emotional processing of auditory hallucinations can be particularly important to better understand the pathophysiology of auditory hallucinations. Moreover, a polymorphism located in this gene (5-HTTLPR) has been previously associated with different disorders related to altered emotional responses. The aim of this study was to evaluate the relationship between different polymorphisms of the SLC6A4 gene and different aspects of auditory hallucinations in schizophrenic patients, with a special consideration toward the emotional response to auditory hallucinations. Two samples of 224 patients with auditory hallucinations and 346 healthy subjects were studied. Auditory hallucinations (AH) were assessed in patients through the PSYRATS scale for auditory hallucinations. Several polymorphisms located within the SLC6A4 gene were analysed through case-control comparisons as well as association analyses with different parameters of auditory hallucinations. No differences were found between patients and controls for any of the analysed polymorphisms (p > 0.05). However, the evaluation of auditory hallucination parameters showed that the low expressing alleles of the 5-HTTLPR polymorphism were associated with higher levels of intensity of the distress caused by auditory hallucinations (p = 0.049 corrected for the item 'intensity of distress'). There was also a trend with the parameter disruption (p = 0.06 corrected). These two items of the PSYRATS scale are directly related to the emotional dimension of auditory hallucinations. In contrast, we did not observe any association with items related to other dimensions of auditory hallucinations. Our results support a possible role of the serotonin transporter in the emotional response to auditory hallucinations.
Farah, Rola; Schmithorst, Vincent J; Keith, Robert W; Holland, Scott K
The purpose of the present study was to identify biomarkers of listening difficulties by investigating white matter microstructure in children suspected of auditory processing disorder (APD) using diffusion tensor imaging (DTI). Behavioral studies have suggested that impaired cognitive and/or attention abilities rather than a pure sensory processing deficit underlie listening difficulties and APD in children. However, the neural signature of listening difficulties has not been investigated. Twelve children with listening difficulties and atypical left ear advantage (LEA) in dichotic listening and twelve age- and gender-matched typically developing children with typical right ear advantage (REA) were tested. Using voxel-based analysis, fractional anisotropy (FA), and mean, axial and radial diffusivity (MD, AD, RD) maps were computed and contrasted between the groups. Listening difficulties were associated with altered white matter microstructure, reflected by decreased FA in frontal multifocal white matter regions centered in prefrontal cortex bilaterally and left anterior cingulate. Increased RD and decreased AD accounted for the decreased FA, suggesting delayed myelination in frontal white matter tracts and disrupted fiber organization in the LEA group. Furthermore, listening difficulties were associated with increased MD (with increase in both RD and AD) in the posterior limb of the internal capsule (sublenticular part) at the auditory radiations where auditory input is transmitted between the thalamus and the auditory cortex. Our results provide direct evidence that listening difficulties in children are associated with altered white matter microstructure and that both sensory and supramodal deficits underlie the differences between the groups.
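The diffusion scalars compared between groups above (FA, MD, AD, RD) are standard closed-form functions of the three diffusion-tensor eigenvalues. A per-voxel sketch in NumPy, illustrative only and not the study's voxel-based pipeline:

```python
import numpy as np

def dti_scalars(evals):
    """Standard DTI scalar measures from the tensor eigenvalues
    (lambda1 >= lambda2 >= lambda3) of one voxel."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0   # mean diffusivity
    ad = l1                     # axial diffusivity (principal direction)
    rd = (l2 + l3) / 2.0        # radial diffusivity (perpendicular)
    # fractional anisotropy: normalized dispersion of the eigenvalues
    num = np.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = np.sqrt(1.5) * num / den
    return fa, md, ad, rd
```

An isotropic voxel (equal eigenvalues) gives FA = 0; a strongly prolate tensor such as (1.7, 0.2, 0.2) × 10⁻³ mm²/s gives FA ≈ 0.87, the regime typical of coherent white-matter tracts.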
Wisniowiecki, Anna M.; Mattison, Scott P.; Kim, Sangmin; Riley, Bruce; Applegate, Brian E.
Zebrafish, auditory specialists among fish, have auditory structures analogous to those of other vertebrates and are a model for hearing and deafness in vertebrates, including humans. Nevertheless, many questions remain about the basic mechanics of the auditory pathway. Phase-sensitive optical coherence tomography has proven to be a valuable technique for functional vibrometric measurements in the murine ear. Such measurements are key to building a complete understanding of auditory mechanics. The application of such techniques in the zebrafish is impeded by the high level of pigmentation, which develops superior to the transverse plane and envelops the auditory system superficially. A zebrafish double mutant for nacre and roy (mitfa-/-;roya-/- [casper]), which lacks neural-crest-derived melanocytes and iridophores at all stages of development, is pursued to improve image quality and sensitivity for functional imaging. So far our investigations with the casper mutants have enabled the identification of the specialized hearing organs, the fluid-filled canal connecting the ears, and sub-structures of the semicircular canals. In our previous work with wild-type zebrafish, we were able to identify and observe stimulated vibration of only the largest structures, specifically the anterior swim bladder and tripus ossicle, even in small larval specimens with fully developed inner ears. In conclusion, this genetic mutant will enable the study of the dynamics of the zebrafish ear from the early larval stages all the way into adulthood.
Pillion, Joseph P; Shiffler, Dorothy E; Hoon, Alexander H; Lin, Doris D M
To describe auditory function in an individual with bilateral damage to the temporal and parietal cortex. Case report. A previously healthy 17-year-old male is described who sustained extensive cortical injury following an episode of viral meningoencephalitis. He developed status epilepticus and required intubation and multiple anticonvulsants. Serial brain MRIs showed bilateral temporoparietal signal changes reflecting extensive damage to language areas and the first transverse gyrus of Heschl on both sides. The patient was referred for assessment of auditory processing but was so severely impaired in speech processing that he was unable to complete any formal tests of his speech processing abilities. Audiological assessment utilizing objective measures of auditory function established the presence of normal peripheral auditory function and illustrates the importance of the use of objective measures of auditory function in patients with injuries to the auditory cortex. Use of objective measures of auditory function is essential in establishing the presence of normal peripheral auditory function in individuals with cortical damage who may not be able to cooperate sufficiently for assessment utilizing behavioral measures of auditory function.
The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent...
Buchholz, Jörg; Kerketsos, P
... detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA...], whose filterbank was designed to approximate auditory filter shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward-masking data. The results of the present study demonstrate that a “purely” spectrum-based model approach can successfully describe auditory coloration detection even at high...
Guthrie, Rachel M; Bryant, Richard A
The present study reports the first prospective psychophysiological investigation, to the authors' knowledge, of posttraumatic stress responses by prospectively evaluating the auditory startle...
Puvvada, Krishna C; Simon, Jonathan Z
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex.

SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory...
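The stimulus-reconstruction approach referred to above fits a linear "backward" model from time-lagged sensor data to the speech envelope, typically regularized with ridge regression, and scores fidelity as the correlation between decoded and actual envelopes. A toy sketch; function names, lag handling, and parameters are illustrative, not the authors' code:

```python
import numpy as np

def ridge_decoder(meg, envelope, lags, lam=1.0):
    """Fit a linear stimulus-reconstruction (backward) model that
    predicts the speech envelope from time-lagged sensor data.
    meg: (n_times, n_channels); envelope: (n_times,)."""
    n_t, n_ch = meg.shape
    X = np.zeros((n_t, n_ch * len(lags)))
    for i, lag in enumerate(lags):
        # circular shift for simplicity; real code would zero-pad
        X[:, i * n_ch:(i + 1) * n_ch] = np.roll(meg, lag, axis=0)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                        X.T @ envelope)
    return w, X

def fidelity(X, w, envelope):
    """Reconstruction fidelity: Pearson correlation between the
    decoded and actual envelopes."""
    return float(np.corrcoef(X @ w, envelope)[0, 1])
```

With simulated sensors that mix a smoothed envelope plus noise, the decoder recovers the envelope with high fidelity; on real data, fidelity differences between attended and ignored streams carry the effect described above.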
Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn
Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians are investigated using the Western classical musician model. The objective of the present study was to adopt an alternative model, Indian classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds than nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies of Western classical musicians, that auditory temporal coding is enhanced in musicians.
Liebenthal, Einat; Möttönen, Riikka
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting between temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast between three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research. Copyright © 2017 Elsevier Inc. All rights reserved.
Engineer, C T; Centanni, T M; Im, K W; Borland, M S; Moreno, N A; Carraway, R S; Wilson, L G; Kilgard, M P
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. © 2014 Wiley Periodicals, Inc.
Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin
Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A. The visual cortices of binocularly blind macaques can thus be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Population-wide inter-spike interval distributions are constructed by summing together intervals from the observed responses of many single Type I auditory nerve fibers. Features in such distributions correspond closely with pitches that are heard by human listeners. The most common all-order interval present in the auditory nerve array almost invariably corresponds to the pitch frequency, whereas the relative fraction of pitch-related intervals amongst all others qualitatively corresponds to the strength of the pitch. Consequently, many diverse aspects of pitch perception are explained in terms of such temporal representations. Similar stimulus-driven temporal discharge patterns are observed in major neuronal populations of the cochlear nucleus. Population-interval distributions constitute an alternative time-domain strategy for representing sensory information that complements spatially organized sensory maps. Similar autocorrelation-like representations are possible in other sensory systems, in which neural discharges are time-locked to stimulus waveforms.
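The population all-order interval representation described above can be sketched directly: pool every positive pairwise spike-time difference across fibers and take the most common interval as the pitch period. A simplified illustration, not the original analysis:

```python
import numpy as np

def allorder_interval_histogram(spike_trains, max_interval, bin_width):
    """Pool all-order inter-spike intervals (between every spike pair,
    not just adjacent spikes) across a population of fibers."""
    edges = np.arange(0.0, max_interval + bin_width, bin_width)
    hist = np.zeros(len(edges) - 1)
    for spikes in spike_trains:
        s = np.sort(np.asarray(spikes))
        diffs = (s[None, :] - s[:, None]).ravel()
        diffs = diffs[(diffs > 0) & (diffs <= max_interval)]
        hist += np.histogram(diffs, bins=edges)[0]
    centers = edges[:-1] + bin_width / 2.0
    return centers, hist

def pitch_estimate(centers, hist):
    # the most common all-order interval is taken as the pitch period
    return 1.0 / centers[np.argmax(hist)]
```

Spike trains phase-locked to a common period produce interval peaks at that period and its multiples, so the histogram mode recovers the fundamental even though no single fiber need fire on every cycle.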
Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten
The perceptual organization of two-tone sequences into auditory streams was investigated using a modeling framework consisting of an auditory pre-processing front end [Dau et al., J. Acoust. Soc. Am. 102, 2892–2905 (1997)] combined with a temporal coherence-analysis back end [Elhilali et al., Neuron 61, 317–329 (2009)]. Two experimental paradigms were considered: (i) stream segregation as a function of tone repetition time (TRT) and frequency separation (Δf) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present study supported the hypothesis that forward masking enhances the ability to perceptually segregate spectrally close tone sequences. Furthermore, the modeling suggested that effects of neural adaptation and processing through modulation-frequency selective filters may enhance the sensitivity to onset...
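The first paradigm's stimuli, ABA- tone triplets parameterized by tone repetition time (TRT) and frequency separation Δf, can be synthesized in a few lines. All parameter values below are illustrative, not those of the study:

```python
import numpy as np

def aba_sequence(f_a=500.0, df_semitones=6.0, trt=0.125,
                 tone_dur=0.1, n_triplets=5, fs=16000):
    """ABA- triplet sequence: one tone onset every `trt` seconds (the
    tone repetition time), the fourth slot silent, and the B tone
    placed `df_semitones` above A."""
    f_b = f_a * 2.0 ** (df_semitones / 12.0)
    n = int(tone_dur * fs)
    t = np.arange(n) / fs
    # 10-ms raised-cosine on/off ramps to avoid spectral splatter
    ramp = np.ones(n)
    nr = int(0.01 * fs)
    r = 0.5 * (1.0 - np.cos(np.pi * np.arange(nr) / nr))
    ramp[:nr], ramp[-nr:] = r, r[::-1]
    slot = int(trt * fs)
    seq = np.zeros(4 * slot * n_triplets)
    for i in range(n_triplets):
        for j, f in enumerate((f_a, f_b, f_a)):
            start = (4 * i + j) * slot
            seq[start:start + n] += np.sin(2 * np.pi * f * t) * ramp
    return seq
```

Small Δf and long TRT favor hearing one coherent stream; large Δf and short TRT favor segregation into separate A and B streams.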
Zeremdini, Jihen; Ben Messaoud, Mohamed Anouar; Bouzid, Aicha
Humans can easily segregate mixed speech and form perceptual representations of the constituent sources in an acoustic mixture thanks to their auditory system. Researchers have attempted to build computer models of these high-level functions of the auditory system, but the segregation of mixed speech remains a very challenging problem. Here, we are interested in approaches that address monaural speech segregation. For this purpose, we study computational auditory scene analysis (CASA) to segregate speech from monaural mixtures. CASA is the reproduction of the source organization achieved by listeners. It is based on two main stages: segmentation and grouping. In this work, we present and compare several studies that have used CASA for speech separation and recognition.
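A common building block in CASA-style segregation is the oracle "ideal binary mask": keep a time-frequency unit when the target dominates the interference there. A toy sketch with a hand-rolled STFT and illustrative parameters; actual CASA systems estimate such a mask from the mixture alone:

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude STFT with a Hann window (toy implementation)."""
    w = np.hanning(win)
    frames = np.array([x[i:i + win] * w
                       for i in range(0, len(x) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))

def ideal_binary_mask(target, interferer, lc_db=0.0):
    """Oracle time-frequency mask: 1 where the target's local SNR
    exceeds the criterion lc_db, else 0. Widely used in CASA work as
    a segregation goal and training target."""
    snr_db = 20.0 * np.log10(stft_mag(target) /
                             (stft_mag(interferer) + 1e-12) + 1e-12)
    return (snr_db > lc_db).astype(float)
```

For a tone in weak noise the mask is 1 in the frequency bins around the tone and 0 almost everywhere else, which is the "segmentation" half of the two-stage pipeline described above; "grouping" then assigns masked regions to sources.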
Crockett, D J; Hadjistavropoulos, T; Hurwitz, T
The present study examined the manifestation of the primacy and recency effects in patients with anterior brain damage, posterior brain damage, and psychiatric inpatients with no known organic impairment. All three groups of patients demonstrated both a primacy and a recency effect on the Rey Auditory Verbal Learning Test (RAVLT). Differences among the three groups with respect to the magnitude of primacy and recency as well as with other variables reflecting free recall were nonsignificant. These findings limit the use of primacy and recency for the differentiation of memory deficits due to organic and nonorganic causes.
Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar
Aging is typically considered to bring a reduction of the ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified and that the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modality or from distinct ones. Some studies suggest that age-related distraction is especially pronounced within a modality, while others suggest it is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (the same tone on most trials, the standard sound; or, on rare and unpredictable trials, a burst of white noise, the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task and not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities and speculate that this may be due to the slowing of this type of attentional shift or a reduction in cognitive control required to re-orient attention toward the target's modality.
Koefoed-Nielsen, Birger; Andersen, Svend Erik Søgaard
Over the last decade evidence on the existence of auditory processing disorder (APD) has increased. Therefore, it is now time to deal with the phenomenon in daily clinical work. This article gives information about APD, especially about problems with the definition of APD, diagnosing APD and the treatment.
Favrot, Sylvain Emmanuel; Buchholz, Jörg
the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques....
Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat
The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed, that adding precursors comprising of only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions, however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.
Dawes, Piers; Munro, Kevin J
It is widely recognized by hearing aid users and audiologists that a period of auditory acclimatization and adjustment is needed for new users to become accustomed to their devices. The aim of the present study was to test the idea that auditory acclimatization and adjustment to hearing aids involves a process of learning to "tune out" newly audible but undesirable sounds, which are described by new hearing aid users as annoying and distracting. It was hypothesized that (1) speech recognition thresholds in noise would improve over time for new hearing aid users, (2) distractibility to noise would reduce over time for new hearing aid users, (3) there would be a correlation between improved speech recognition in noise and reduced distractibility to background sounds, (4) improvements in speech recognition and distraction would be accompanied by self-report of reduced annoyance, and (5) improvements in speech recognition and distraction would be associated with higher general cognitive ability and more hearing aid use. New adult hearing aid users (n = 35) completed a test of aided speech recognition in noise (SIN) and a test of auditory distraction by background sound amplified by hearing aids on the day of fitting and 1, 7, 14, and 30 days post fitting. At day 30, participants completed self-ratings of the annoyance of amplified sounds. Daily hearing aid use was measured via hearing aid data logging, and cognitive ability was measured with the Wechsler Abbreviated Scale of Intelligence block design test. A control group of experienced hearing aid users (n = 20) completed the tests over a similar time frame. At day 30, there was no statistically significant improvement in SIN among new users versus experienced users. However, levels of hearing loss and hearing aid use varied widely among new users. A subset of new users with moderate hearing loss who wore their hearing aids at least 6 hr/day (n = 10) had significantly improved SIN (by ~3-dB signal to noise ratio
Yager, D D
Like other praying mantises, Hierodula membranacea has a single midline ear on the ventral surface of the metathorax. The ear comprises a deep groove with two tympana forming the walls. A tympanal organ on each side contains 30-40 scolopophorous sensillae with axons that terminate in the metathoracic ganglion in neuropil that does not match the auditory neuropil of other insects. Nymphal development of the mantis ear proceeds in three major stages: 1) The tympanal organ is completely formed with a full complement of sensillae before hatching; 2) the infolding and rotations that form the deep groove are completed primarily over the first half of nymphal development; and 3) over the last five instars (of ten), the tympana thicken and broaden to their adult size and shape, and the impedance-matching tracheal sacs also enlarge and move to become tightly apposed to the inner surfaces of the tympana. Auditory sensitivity gradually increases beginning with the fifth instar and closely parallels tympanum and tracheal sac growth. Late instar nymphs have auditory thresholds of 70-80 dB sound pressure level (SPL). Appropriate connections of afferents to a functional interneuronal system are clearly present by the eighth instar and possibly much earlier. The pattern of auditory system ontogeny in the mantis is similar to that in locusts and in noctuid moths, but it differs from crickets. In evolutionary terms, it is significant that the metathoracic anatomy of newly hatched mantis nymphs matches very closely the anatomy of the homologous regions in adult cockroaches, which are closely related to mantises but are without tympanal hearing, and in mantises that are thought to be primitively deaf.
Lewis, Doris Ruthy; Marone, Silvio Antonio Monteiro; Mendes, Beatriz C A; Cruz, Oswaldo Laercio Mendonça; Nóbrega, Manoel de
Created in 2007, COMUSA is a multiprofessional committee comprising speech therapy, otology, otorhinolaryngology and pediatrics with the aim of debating and countersigning auditory health actions for neonatal, lactating, preschool and school children, adolescents, adults and elderly persons. COMUSA includes representatives of the Brazilian Audiology Academy (Academia Brasileira de Audiologia or ABA), the Brazilian Otorhinolaryngology and Cervicofacial Surgery Association (Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico Facial or ABORL), the Brazilian Phonoaudiology Society (Sociedade Brasileira de Fonoaudiologia or SBFa), the Brazilian Otology Society (Sociedade Brasileira de Otologia or SBO), and the Brazilian Pediatrics Society (Sociedade Brasileira de Pediatria or SBP).
Barbour, Dennis L; Wang, Xiaoqin
Natural sounds often contain energy over a broad spectral range and consequently overlap in frequency when they occur simultaneously; however, such sounds under normal circumstances can be distinguished perceptually (e.g., the cocktail party effect). Sound components arising from different sources have distinct (i.e., incoherent) modulations, and incoherence appears to be one important cue used by the auditory system to segregate sounds into separately perceived acoustic objects. Here we show that, in the primary auditory cortex of awake marmoset monkeys, many neurons responsive to amplitude- or frequency-modulated tones at a particular carrier frequency [the characteristic frequency (CF)] also demonstrate sensitivity to the relative modulation phase between two otherwise identically modulated tones: one at CF and one at a different carrier frequency. Changes in relative modulation phase reflect alterations in temporal coherence between the two tones, and the most common neuronal response was found to be a maximum of suppression for the coherent condition. Coherence sensitivity was generally found in a narrow frequency range in the inhibitory portions of the frequency response areas (FRA), indicating that only some off-CF neuronal inputs into these cortical neurons interact with on-CF inputs on the same time scales. Over the population of neurons studied, carrier frequencies showing coherence sensitivity were found to coincide with the carrier frequencies of inhibition, implying that inhibitory inputs create the effect. The lack of strong coherence-induced facilitation also supports this interpretation. Coherence sensitivity was found to be greatest for modulation frequencies of 16-128 Hz, which is higher than the phase-locking capability of most cortical neurons, implying that subcortical neurons could play a role in the phenomenon. Collectively, these results reveal that auditory cortical neurons receive some off-CF inputs temporally matched and some temporally
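The coherent versus incoherent conditions described above can be illustrated by summing two amplitude-modulated tones whose modulator phases either match or are offset. The sketch below uses illustrative carrier and modulation frequencies of our own choosing, not the study's actual stimulus parameters:

```python
import numpy as np

def am_tone(fc, fm, mod_phase, dur, fs):
    """Sinusoidally amplitude-modulated tone at carrier fc (Hz) with
    modulation rate fm (Hz); mod_phase shifts the envelope in time."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 0.5 * (1.0 + np.cos(2 * np.pi * fm * t + mod_phase))
    return envelope * np.sin(2 * np.pi * fc * t)

fs = 48000
# Coherent condition: both envelopes rise and fall together.
coherent = am_tone(1000, 32, 0.0, 0.5, fs) + am_tone(2500, 32, 0.0, 0.5, fs)
# Incoherent condition: envelopes in antiphase, so the two components
# never reach their peaks at the same moment.
incoherent = am_tone(1000, 32, 0.0, 0.5, fs) + am_tone(2500, 32, np.pi, 0.5, fs)
```

In the coherent condition the summed waveform reaches larger instantaneous peaks, because both components are loud at the same moments; this is the kind of temporal-coherence cue the neurons described above appear sensitive to.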
Ro, A; Chernoff, G; MacRae, D; Orton, R B; Cadera, W
We obtained audiograms and auditory brainstem responses from 44 patients with Duane's retraction syndrome to assess the incidence and nature of hearing deficit. Of 44 patients, seven (15.9%) had evidence of hearing impairment. Three (6.8%) subjects had a temporary conductive hearing loss because of middle ear fluid, and another patient had hearing loss from Crouzon's disease. The remaining three (6.8%) patients demonstrated sensorineural hearing deficit. This hearing impairment was attributed to a cochlear lesion and not to a pontine lesion. We believe that the frequency of sensorineural hearing loss in these patients warrants hearing screening programs similar to those used for infants in neonatal intensive care units.
King, Wayne M; Lombardino, Linda J; Crandell, Carl C; Leonard, Christiana M
The primary objective of this study was to investigate the extent of comorbid auditory processing disorder (APD) in a group of adults with developmental dyslexia. An additional objective was to compare performance on auditory tasks to results from standardized tests of reading in an attempt to generate a clinically useful profile of developmental dyslexics with comorbid APD. A group of eleven persons with developmental dyslexia and 14 age- and intelligence-matched controls participated in the study. Behavioral audiograms, 226-Hz tympanograms, and word recognition scores were obtained binaurally from all subjects. Both groups were administered the frequency-pattern test (FPT) and duration-pattern test (DPT) monaurally (30 items per ear) in both the left and right ear. Gap detection results were obtained in both groups (binaural presentation) using narrowband noise centered at 1 kHz in an adaptive two-alternative forced-choice (2-AFC) paradigm. The FPT, DPT, and gap detection results were analyzed for interaural (where applicable), intergroup, and intragroup differences. Correlations between performance on the auditory tasks and the standardized tests of reading were examined. Additive logistic regression models were fit to the data to determine which auditory tests proved to be the best predictors of group membership. The persons with developmental dyslexia as a group performed significantly poorer than controls on both the FPT and DPT. Furthermore, the group differences were significant in both monaural conditions. On the FPT and DPT, five of the eleven participants with dyslexia performed below the widely used clinical criterion for APD of 70% correct in either ear. All five of these participants performed below criterion on the FPT, whereas four of the five additionally performed below 70% on the DPT. The data also were analyzed by fitting a series of stepwise logistic regression models, which indicated that gap detection did not significantly predict group
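The adaptive two-alternative forced-choice (2-AFC) gap-detection procedure mentioned above is typically run as a staircase. The abstract does not specify the adaptive rule, so the 2-down/1-up variant below, which converges on roughly 70.7% correct, is an illustrative assumption:

```python
def two_down_one_up(respond, start, step, n_reversals=8):
    """Levitt 2-down/1-up staircase: the level gets harder (smaller) after
    two consecutive correct responses and easier after each error; the
    threshold estimate is the mean level at the tracked reversals."""
    level, streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:            # two correct in a row -> step down
                streak = 0
                if direction == +1:    # was stepping up: count a reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:
            streak = 0                 # any error -> step up
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)

# Simulated listener who detects any gap of 5 ms or longer.
threshold = two_down_one_up(lambda gap_ms: gap_ms >= 5, start=20, step=1)
```

With this deterministic listener the track oscillates between 4 and 5 ms, so the reversal average lands at 4.5 ms, just below the true detection limit.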
Shenton Martha E
Abstract Background: Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results: Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the LH sources. Conclusion: These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by
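The phase locking factor (PLF) reported in this study is, in essence, the magnitude of the mean unit phase vector across trials at the stimulation frequency. A minimal sketch on simulated 40 Hz trials follows; the simulation parameters (sampling rate, jitter, noise level, trial count) are illustrative assumptions, not the study's actual settings:

```python
import numpy as np

def phase_locking_factor(trials, fs, freq):
    """PLF at `freq`: magnitude of the mean unit phase vector across trials
    (1.0 = perfect phase locking across trials, ~0 = random phase)."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))      # FFT bin nearest the target frequency
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Simulated 40 Hz steady-state responses: small trial-to-trial phase jitter
# plus additive noise.
fs, dur, n_trials = 1000, 0.5, 64
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
trials = np.array([np.sin(2 * np.pi * 40 * t + rng.normal(0.0, 0.2))
                   + rng.normal(0.0, 1.0, t.size) for _ in range(n_trials)])
plf = phase_locking_factor(trials, fs, 40.0)
```

Because the simulated trials are tightly phase-locked at 40 Hz, the resulting PLF is close to 1; fully random phases would drive it toward 0.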
PRAKASH KUMAR G
Auditory discrimination and learning in songbirds. J. Biosci. 33(1) 145 … formation and/or storage. [Pinaud R and Terleph T A 2008 A songbird forebrain area potentially involved in auditory discrimination and memory formation; J. Biosci. …]
Lüttke, C.S.; Ekman, M.; Gerven, M.A.J. van; Lange, F.P. de
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory/aba/ and a visual /aga/ are merged to the percept of 'ada'. It is less clear however whether such a change in perception may recalibrate subsequent perception. Here we
This paper aims to provide a review of the emerging Auditory Steady State Response in light of existing procedures for diagnosis of hearing loss in infants. Determining the type, degree, and configuration of hearing loss in infants is a challenge requiring sophisticated electrophysiological equipment of which Auditory ...
Bruin, N.M.W.J. de; Luijtelaar, E.L.J.M. van; Cools, A.R.; Ellenbroek, B.A.
RATIONALE: Auditory filtering disturbances, as measured in the sensory gating and prepulse inhibition (PPI) paradigms, have been linked to aberrant auditory information processing and sensory overload in schizophrenic patients. In both paradigms, the response to the second stimulus (S2) is
Langers, DRM; van Dijk, P; Backes, WH
Although it is known that responses in the auditory cortex are evoked predominantly contralateral to the side of stimulation, the lateralization of responses at lower levels in the human central auditory system has hardly been studied. Furthermore, little is known on the functional interactions
Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher; Møllegaard Jepsen, Jens Richardt; Durston, Sarah; Cantio, Cathriona; Glenthøj, Birte; Bilenberg, Niels
Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism are
Moore, David R; Halliday, Lorna F; Amitay, Sygal
This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research.
Terband, H.R.; van Brenk, F.J.; van Doornik-van der Zee, J.C.
Background/purpose: Several studies indicate a close relation between auditory and speech motor functions in children with speech sound disorders (SSD). The aim of this study was to investigate the ability to compensate and adapt for perturbed auditory feedback in children with SSD compared to
Nguyen, Andy; Cabrera, Densil
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression-thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
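Convolving an anechoic recording with a measured binaural impulse response, as in the stimulus preparation above, is a standard auralization step. A minimal sketch follows; the function and variable names are ours, not from the study:

```python
import numpy as np

def auralize(anechoic, brir_left, brir_right):
    """Simulate listening at the measurement position by convolving a mono
    anechoic signal with left/right binaural room impulse responses."""
    left = np.convolve(anechoic, brir_left)
    right = np.convolve(anechoic, brir_right)
    return np.stack([left, right], axis=1)

# Toy check: a pure-delay impulse response simply shifts the signal.
signal = np.array([1.0, 0.5, -0.25])
delay3 = np.zeros(5); delay3[3] = 1.0
binaural = auralize(signal, delay3, delay3)
```

In practice the impulse responses would come from dummy-head measurements at each seat, so each output channel carries that position's reflections and interaural cues.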
McCourt, Mark E; Leone, Lynnette M
We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.
Background: We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a
de Wit, Ellen; Visser-Bochane, Margot I.; Steenbergen, Bert; van Dijk, Pim; van der Schans, Cees P.; Luinge, Margreet R.
Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit specific to the auditory modality or as a…
Bartels-Velthuis, A.A.; Jenner, J.A.; van de Willige, G.; van Os, J.; Wiersma, D.
Background: Hearing voices occurs in middle childhood, but little is known about prevalence, aetiology and immediate consequences. Aims: To investigate prevalence, developmental risk factors and behavioural correlates of auditory vocal hallucinations in 7- and 8-year-olds. Method: Auditory vocal
Miller, Carol A.
Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…
Neijenhuis, C.A.M.; Stollman, M.H.P.; Snik, A.F.M.; Broek, P. van den
There is little standardized test material in Dutch to document central auditory processing disorders (CAPDs). Therefore, a new central auditory test battery was composed and standardized for use with adult populations and older children. The test battery comprised seven tests (words in noise,
Bailey, Frank S.; Yocum, Russell G.
The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…
Rukjær, Andreas Harbo; Hauen, Sigurd van; Ordoñez Pizarro, Rodrigo Eduardo
The basic frequency selectivity in the listener's hearing is often characterized by auditory filters. These filters are determined through listening tests, which estimate the masking threshold as a function of the frequency of the tone and the bandwidth of the masking sound. The auditory filters have … at 1, 2, and 4 kHz for 10 young normal-hearing subjects …
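The abstract does not name the filter model, but auditory-filter bandwidths of this kind are commonly summarized by the equivalent rectangular bandwidth (ERB) formula of Glasberg & Moore (1990); using it here is our assumption. A sketch evaluating it at the centre frequencies tested above:

```python
def erb_hz(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the normal auditory filter
    centred at f_hz, after Glasberg & Moore (1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# ERB at the three centre frequencies used in the listening tests.
bandwidths = {f: erb_hz(f) for f in (1000, 2000, 4000)}
```

The bandwidth grows roughly proportionally with centre frequency (about 133 Hz at 1 kHz versus about 456 Hz at 4 kHz), which is why masking experiments must vary masker bandwidth with frequency.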
Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B
Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Miller, Kimberly M; Finney, Glen R; Meador, Kimford J; Loring, David W
Dysnomia is typically assessed during neuropsychological evaluation through visual confrontation naming. Responsive naming to description, however, has been shown to have a more distributed representation in both fMRI and cortical stimulation studies. While naming deficits are common in dementia, the relative sensitivity of visual confrontation versus auditory responsive naming has not been directly investigated. The current study compared visual confrontation naming and auditory responsive naming in a dementia sample of mixed etiologies to examine patterns of performance across these naming tasks. A total of 50 patients with dementia of various etiologies were administered visual confrontation naming and auditory responsive naming tasks using stimuli that were matched in overall word frequency. Patients performed significantly worse on auditory responsive naming than visual confrontation naming. Additionally, patients with mixed Alzheimer's disease/vascular dementia performed more poorly on auditory responsive naming than did patients with probable Alzheimer's disease, although no group differences were seen on the visual confrontation naming task. Auditory responsive naming correlated with a larger number of neuropsychological tests of executive function than did visual confrontation naming. Auditory responsive naming appears to be more sensitive to the effects of increased lesion burden compared with visual confrontation naming. We believe that this reflects a more widespread topographical distribution of auditory naming sites within the temporal lobe, but it may also reflect the contributions of working memory and cognitive flexibility to performance.
Kunert, R.; Jongman, S.R.
Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of
Kwong, Tru E; Brachman, Kyle J
Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.
Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto
Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure of recognition of a global visual Gestalt, like a visual scene or complex objects, consisting of local elements. In this study we investigated to what extent this deficit should be understood as a deficit related to specifically the visual domain or whether it should be seen as defective Gestalt processing per se. To examine if simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic for simultanagnosia and that the human brain obviously uses independent mechanisms for visual and for auditory Gestalt perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rønne, Filip Munch; Dau, Torsten; Harte, James
A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...
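The convolution model described above can be sketched in outline: the instantaneous discharge rates of the simulated auditory-nerve fibres are summed and convolved with an elementary (unitary) response to yield the ABR waveform. All names and the toy inputs below are ours, not the paper's implementation:

```python
import numpy as np

def abr_waveform(fiber_rates, unitary_response):
    """ABR model output: sum instantaneous discharge rates across simulated
    auditory-nerve fibres, then convolve with the unitary response."""
    summed_rate = np.sum(fiber_rates, axis=0)
    return np.convolve(summed_rate, unitary_response)

# Toy check: a single instantaneous burst of firing at one time point
# reproduces the unitary response, delayed by the burst's latency.
rates = np.zeros((3, 8)); rates[1, 2] = 1.0
unitary = np.array([0.0, 1.0, -0.5, 0.1])
abr = abr_waveform(rates, unitary)
```

In a full model the rates would come from a nonlinear auditory-nerve front end driven by the stimulus (tone pulse, click or chirp), so level-dependent cochlear effects appear in the simulated ABR.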
Loehr, J.D.; Palmer, C.
The current study examined how auditory and kinematic information influenced pianists' ability to synchronize musical sequences with a metronome. Pianists performed melodies in which quarter-note beats were subdivided by intervening eighth notes that resulted from auditory information (heard tones),
Shahidipour, Zahra; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Khosravifard, Elham
Hearing loss is one of the most common problems in elderly people, and its functional side effects are varied. Because hearing loss is a common impairment in elderly people, its possible effects on auditory memory are important to examine. This study focuses on the effects of hearing loss on auditory memory. The Dichotic Auditory Memory Test (DVMT) was performed on 47 elderly people aged 60 to 80, divided into two groups: the first consisted of 24 elderly people with normal hearing, and the second of 23 elderly people with bilateral, symmetrical, mild-to-moderate high-frequency sensorineural hearing loss due to aging; both genders were represented. A significant difference in DVMT performance was observed between elderly people with normal hearing and those with hearing loss (P …) in auditory verbal memory. This result highlights the importance of auditory intervention to improve communication skills and, in turn, auditory memory in this population.
Pfordresher, Peter Q
Recent research has shown that music training enhances music-related sensorimotor associations, such as the relationship between a key press on the keyboard and its associated musical pitch (auditory feedback). Such results suggest that the role of auditory feedback in performance may be based on learned associations that are task specific. Here, results from various studies will be presented that suggest that the real state of affairs is more complex. Several recent studies have shown similar effects of altered auditory feedback during piano performance for pianists and individuals with no piano training. Other recent research suggests dramatic differences between pianists and nonmusicians concerning the influence of auditory feedback on melody switching that suggest greater influence of auditory feedback among nonmusicians than pianists. Taken together, results suggest that musical training refines preexisting sensorimotor associations. © 2012 New York Academy of Sciences.
Afra, Pegah; Anderson, Jeffrey; Funke, Michael; Johnson, Michael; Matsuo, Fumisuke; Constantino, Tawnya; Warner, Judith
We present a case of acquired auditory-visual synesthesia and its neurophysiological investigation in a healthy 42-year-old woman. She started experiencing persistent positive and intermittent negative visual phenomena at age 37, followed by auditory-visual synesthesia. Her neurophysiological investigation included video-EEG, fMRI, and MEG. Auditory stimuli (700 Hz, 50 ms duration, 0.5 s ISI) were presented binaurally at 60 dB above the hearing threshold in a dark room. The patient had bilateral symmetrical auditory-evoked neuromagnetic responses followed by an occipital-evoked field 16.3 ms later. The activation of occipital cortex following auditory stimuli may represent recruitment of existing cross-modal sensory pathways.
The intrauterine environment allows the fetus to begin hearing with low-frequency sounds in a protected fashion, ensuring optimal development of the peripheral and central auditory system. However, the auditory nursery provided by the womb vanishes once the preterm newborn enters the high-frequency (HF), noisy environment of the neonatal intensive care unit (NICU). The present article draws a concerning line between auditory system development and HF noise in the NICU, which is not necessarily conducive to fostering this development. Overexposure to HF noise during critical periods disrupts the functional organization of auditory cortical circuits. As a result, we theorize, the ability to tune out noise and extract acoustic information in a noisy environment may be impaired, leading to a variety of auditory, language, and attention disorders. Additionally, HF noise in the NICU often masks human speech sounds potentially important to the preterm infant, whose exposure to linguistic stimuli is already restricted. Understanding the impact of the sound environment on the developing auditory system is an important first step in meeting the developmental demands of preterm newborns undergoing intensive care.
Can, Handan; Doğutepe, Elvin; Torun Yazıhan, Nakşidil; Korkman, Hamdi; Erdoğan Bakar, Emel
The Auditory Verbal Learning Test (AVLT) is frequently used in the neuropsychology literature to comprehensively assess memory. The test measures verbal learning as immediate and delayed free recall, recognition, and retroactive and proactive interference. Adaptation of the AVLT to Turkish society has been completed, whereas research and development studies are still underway. The purpose of the present study is to investigate the construct validity of the test in order to contribute to the research and development process. In line with this purpose, the research data were obtained from 78 healthy participants aged between 20 and 69. The exclusion criteria included neurological and/or psychiatric disorders as well as untreated auditory/visual disorders. The AVLT was administered to participants individually by two trained psychologists. Principal component analysis, used to investigate the components represented by the AVLT scores, yielded components of learning and of free recall and recognition, in line with the construct of the test. Distractors were also added to these two components in the structural equation model. Analyses were carried out at a descriptive level to establish the relationships between age, education, gender and AVLT scores. These findings, which are consistent with the literature indicating that memory is affected by the developmental process, suggest that the learning/free recall, recognition, and distractor scores of the AVLT demonstrate a component pattern consistent with theoretical knowledge. This conclusion suggests that the AVLT is a valid measurement tool for Turkish society.
Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R
Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such effect may not be limited to negative emotions but reflect a general depletion of attentional resources by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of the exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.
Full Text Available Pediatric hearing evaluation based on pure tone audiometry does not always reflect how a child hears in everyday life. This practice is inappropriate when evaluating the difficulties children experiencing auditory processing disorder (APD) face in school or on the playground. Despite the marked increase in research on pediatric APD, there remains limited access to proper evaluation worldwide. This perspective article presents five common misconceptions about APD that contribute to inappropriate or limited management in children experiencing these deficits. The misconceptions discussed are (1) the disorder cannot be diagnosed due to the lack of a gold standard diagnostic test; (2) making generalizations based on profiles of children suspected of APD but not diagnosed with the disorder; (3) it is best to discard an APD diagnosis when another disorder is present; (4) arguing that the known link between auditory perception and higher cognitive function precludes the validity of APD as a clinical entity; and (5) APD is not a clinical entity. These five misconceptions are described and rebutted using published data as well as critical thinking on currently available knowledge of APD.
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test, which assesses sentence perception in
Zupan, Barbra; Sussman, Joan E.
Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…
Jacks, Adam; Haley, Katarina L.
Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…
Parving, A; Salomon, G; Elberling, Claus
An investigation of the middle components of the auditory evoked response (10–50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were demonstrated by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...
Noles, Nicholaus S.; Gelman, Susan A.
The goal of the present study is to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study is motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for auditory cues. Across three experiments, preschool children and adults were trained to use paired audio-visual cues to predict the location of a target. At test, the cues were switched so that auditory cues indicated one location and visual cues indicated the opposite location. In contrast to prior studies, preschool age children did not exhibit auditory dominance. Instead, children and adults flexibly shifted their preferences as a function of the degree of contrast within each modality (with high contrast leading to greater use). PMID:22513210
Hironori Kuga, M.D.
We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
Vestergaard, Martin David
… no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution of hearing, cognitive skills, and auditory lifestyle in 25 new hearing-aid users. The purpose was to assess the predictive power of the nonauditory measures while looking at the relationships between measures from various auditory-performance domains. The results showed that only moderate correlation exists between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle …
Maczko, Kristin A; Knudsen, Phyllis F; Knudsen, Eric I
The nucleus isthmi pars parvocellularis (Ipc) is a midbrain cholinergic nucleus that shares reciprocal, topographic connections with the optic tectum (OT). Ipc neurons project to spatially restricted columns in the OT, contacting essentially all OT layers in a given column. Previous research characterizes the Ipc as a visual processor. We found that, in the barn owl, the Ipc responds to auditory as well as to visual stimuli. Auditory responses were tuned broadly for frequency, but sharply for spatial cues. We measured the tuning of Ipc units to binaural sound localization cues, including interaural timing differences (ITDs) and interaural level differences (ILDs). Units in the Ipc were tuned to specific values of both ITD and ILD and were organized systematically according to their ITD and ILD tuning, forming a map of space. The auditory space map aligned with the visual space map in the Ipc. These results demonstrate that the Ipc encodes the spatial location of objects, independent of stimulus modality. These findings, combined with the precise pattern of projections from the Ipc to the OT, suggest that the role of the Ipc is to regulate the sensitivity of OT neurons in a space-specific manner.
Razak, Khaleel A; Fuzessery, Zoltan M
A consistent organizational feature of auditory cortex is a clustered representation of binaural properties. Here we address two questions. What is the intrinsic organization of binaural clusters and to what extent does intracortical processing contribute to binaural representation. We address these issues in the auditory cortex of the pallid bat. The pallid bat listens to prey-generated noise transients to localize and hunt terrestrial prey. As in other species studied, binaural clusters are present in the auditory cortex of the pallid bat. One cluster contains neurons that require binaural stimulation to be maximally excited, and are commonly termed predominantly binaural (PB) neurons. These neurons do not respond to monaural stimulation of either ear but show a peaked sensitivity to interaural intensity differences (IID) centered near 0 dB IID. We show that the peak IID varies systematically within this cluster. The peak IID is also correlated with the best frequency (BF) of neurons within this cluster. In addition, the IID selectivity of PB neurons is shaped by intracortical GABAergic input. Iontophoresis of GABA(A) receptor antagonists on PB neurons converts a majority of them to binaurally inhibited (EI) neurons that respond best to sounds favoring the contralateral ear. These data indicate that the cortex does not simply inherit binaural properties from lower levels but instead sharpens them locally through intracortical inhibition. The IID selectivity of the PB cluster indicates that the pallid bat cortex contains an increased representation of the frontal space that may underlie increased localization accuracy in this region.
Meyer, Martin; Elmer, Stefan; Ringli, Maya; Oechslin, Mathias S; Baumann, Simon; Jancke, Lutz
This event-related brain potential study aims to contribute to the present debate regarding the effect of musical training on the maturation of the human auditory nervous system. To address this issue, we recorded the mismatch negativity (MMN) evoked by violin and pure sine-wave tones in a group of 7.5- to 12-year-old children who had either several years of musical experience with Suzuki violin lessons, or no musical training. The strength of the MMN responses to violin tones evident in the Suzuki students clearly surpassed responses in controls; the reverse pattern was observed for sine-wave tones. Suzuki students showed significantly shorter MMN latencies to violin tones than to pure tones; the MMN latency did not differ significantly between pure tones and violin sounds in the control group. Thus, our data provide general evidence of how and to what extent extensive musical experience affects the maturation of human auditory function at multiple levels, namely, accuracy and speed of auditory discrimination processing. Our findings add to the present understanding of neuroplastic organization and function of the mammalian nervous system. Furthermore, behavioural recordings obtained from the participating children provide corroborating evidence for a relationship between the duration and intensity of training, the specific sensitivity to instrumental timbre, and pitch recognition abilities. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Lindemann, Kristy L.; Reichmuth-Kastak, Colleen; Schusterman, Ronald J.
The model of stimulus equivalence describes how perceptually dissimilar stimuli can become interrelated to form useful categories both within and between the sensory modalities. A recent experiment expanded upon prior work with a California sea lion by examining stimulus classification across the auditory and visual modalities. Acoustic stimuli were associated with an exemplar from one of two pre-existing visual classes in a matching-to-sample paradigm. After direct training of these associations, the sea lion showed spontaneous transfer of the new auditory stimuli to the remaining members of the visual classes. The sea lion's performance on this cross-modal equivalence task was similar to that shown by human subjects in studies of emergent word learning and reading comprehension. Current research with the same animal further examines how stimulus classes can be expanded across modalities. Fast-mapping techniques are used to rapidly establish new auditory-visual relationships between acoustic cues and multiple arbitrary visual stimuli. Collectively, this research illustrates complex cross-modal performances in a highly experienced subject and provides insight into how animals organize information from multiple sensory modalities into meaningful representations.
Farley, Brandon J.
How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts. PMID:26269558
Schulze, K; Mueller, K; Koelsch, S
Working memory (WM) performance in humans can be improved by structuring and organizing the material to be remembered. For visual and verbal information, this process of structuring has been associated with the involvement of a prefrontal-parietal network, but for non-verbal auditory material, the brain areas that facilitate WM for structured information have remained elusive. Using functional magnetic resonance imaging, this study compared neural correlates underlying encoding and rehearsal of auditory WM for structured and unstructured material. Musicians and non-musicians performed a WM task on five-tone sequences that were either tonally structured (with all tones belonging to one tonal key) or tonally unstructured (atonal) sequences. Functional differences were observed for musicians (who are experts in the music domain), but not for non-musicians: the right pars orbitalis was activated more strongly in musicians during the encoding of unstructured (atonal) vs. structured (tonal) sequences. In addition, data for musicians showed that a lateral (pre)frontal-parietal network (including the right premotor cortex, right inferior precentral sulcus and left intraparietal sulcus) was activated during WM rehearsal of structured, as compared with unstructured, sequences. Our findings indicate that this network plays a role in strategy-based WM for non-verbal auditory information, corroborating previous results showing a similar network for strategy-based WM for visual and verbal information. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Ebendt, R; Friedel, J; Kalmring, K
The projection patterns of morphologically and functionally identified auditory and auditory-vibratory receptor cells of receptor organs (the crista acustica and the intermediate organ) in the foreleg of the tettigoniid Psorodonotus illyricus were investigated with combined recording and staining techniques, followed by histological examination and morphometric measurements. With the application of a computer program (AutoCAD), three-dimensional reconstructions of the axon end branches of receptor cells within the neuropile of the anterior Ring Tract (aRT) were made in order to determine the entire shape of each, the pattern and density of the end branches, and the positions of the target areas within the auditory neuropile. Clear differences between the different functional types of receptors were found.
Nourski, Kirill V; Steinschneider, Mitchell; Rhone, Ariane E; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob
High gamma power has become the principal means of assessing auditory cortical activation in human intracranial studies, albeit at the expense of low-frequency local field potentials (LFPs). It is unclear whether limiting analyses to high gamma impedes the ability to clarify auditory cortical organization. We compared the two measures obtained from posterolateral superior temporal gyrus (PLST) and evaluated their relative utility in sound categorization. Subjects were neurosurgical patients undergoing invasive monitoring for medically refractory epilepsy. Stimuli (consonant-vowel syllables varying in voicing and place of articulation, and control tones) elicited robust evoked potentials and high gamma activity on PLST. LFPs had greater across-subject variability, yet yielded higher classification accuracy relative to high gamma power. Classification was enhanced by including temporal detail of LFPs and by combining LFP and high gamma. We conclude that future studies should consider utilizing both LFP and high gamma when investigating the functional organization of human auditory cortex. Copyright © 2015 Elsevier Inc. All rights reserved.
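A comparison of classification accuracy from two neural feature sets, in the spirit of the analysis described above, can be sketched roughly as follows. The synthetic "LFP" and "high gamma" features, the effect sizes, and the choice of classifier are all assumptions for illustration, not the study's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Two syllable categories, 100 trials each
labels = np.repeat([0, 1], 100)

# Hypothetical features: an LFP time course (20 samples, carrying temporal
# detail) vs. a single high-gamma power value per trial
lfp = rng.normal(0, 1, (200, 20))
lfp[labels == 1, 5:10] += 0.8   # category difference in an early time window
hg = rng.normal(0, 1, (200, 1))
hg[labels == 1] += 0.5          # weaker single-feature category difference

clf = LogisticRegression(max_iter=1000)
acc_lfp = cross_val_score(clf, lfp, labels, cv=5).mean()
acc_hg = cross_val_score(clf, hg, labels, cv=5).mean()
acc_both = cross_val_score(clf, np.hstack([lfp, hg]), labels, cv=5).mean()

print(f"LFP: {acc_lfp:.2f}  high gamma: {acc_hg:.2f}  combined: {acc_both:.2f}")
```

The point of the sketch is structural: a classifier given the full time course can exploit temporal detail that a single power value discards, and concatenating both feature sets lets cross-validated accuracy reveal whether they carry complementary information.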
Moerel, Michelle; De Martino, Federico; Formisano, Elia
Auditory cortical processing of complex meaningful sounds entails the transformation of sensory (tonotopic) representations of incoming acoustic waveforms into higher-level sound representations (e.g., their category). However, the precise neural mechanisms enabling such transformations remain largely unknown. In the present study, we use functional magnetic resonance imaging (fMRI) and natural sounds stimulation to examine these two levels of sound representation (and their relation) in the human auditory cortex. In a first experiment, we derive cortical maps of frequency preference (tonotopy) and selectivity (tuning width) by mathematical modeling of fMRI responses to natural sounds. The tuning width maps highlight a region of narrow tuning that follows the main axis of Heschl's gyrus and is flanked by regions of broader tuning. The narrowly tuned portion on Heschl's gyrus contains two mirror-symmetric frequency gradients, presumably defining two distinct primary auditory areas. In addition, our analysis indicates that spectral preference and selectivity (and their topographical organization) extend well beyond the primary regions and also cover higher-order and category-selective auditory regions. In particular, regions with preferential responses to human voice and speech occupy the low-frequency portions of the tonotopic map. We confirm this observation in a second experiment, where we find that speech/voice selective regions exhibit a response bias toward the low frequencies characteristic of human voice and speech, even when responding to simple tones. We propose that this frequency bias reflects the selective amplification of relevant and category-characteristic spectral bands, a useful processing step for transforming a sensory (tonotopic) sound image into higher level neural representations.
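Voxel-wise estimation of frequency preference and tuning width, in the spirit of the response modeling described above, can be sketched by fitting a Gaussian tuning curve over log-frequency to each voxel's response profile. The tuning-model form, frequency range, and parameter values below are illustrative assumptions, not the study's actual model:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(log_f, amp, bf, width):
    """Response as a Gaussian over log2-frequency:
    bf = best frequency (log2 Hz), width = tuning width (octaves)."""
    return amp * np.exp(-0.5 * ((log_f - bf) / width) ** 2)

# Stimulus frequencies from 200 Hz to 8 kHz
freqs = np.logspace(np.log10(200), np.log10(8000), 30)
log_f = np.log2(freqs)

# Synthetic voxel tuned to ~1 kHz with ~1-octave width, plus measurement noise
rng = np.random.default_rng(2)
true_bf, true_width = np.log2(1000), 1.0
resp = gaussian_tuning(log_f, 1.0, true_bf, true_width) + rng.normal(0, 0.05, 30)

(amp, bf, width), _ = curve_fit(gaussian_tuning, log_f, resp,
                                p0=[1.0, np.log2(500), 2.0])
print(f"best frequency ~{2 ** bf:.0f} Hz, tuning width ~{abs(width):.2f} octaves")
```

Repeating such a fit across voxels yields the two maps the abstract refers to: the fitted best frequencies form a tonotopic map, and the fitted widths form a tuning-width (selectivity) map.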
Full Text Available Background: Diabetes mellitus is a complex metabolic disorder whose detrimental effects on various organ systems, including the nervous system, are well known. Aim: This study was conducted to determine the changes in brainstem auditory evoked potentials (BAEP) in patients with type 2 diabetes mellitus. Materials and Methods: In this case-control study, 116 females with type 2 diabetes and 100 age-matched, healthy female volunteers were selected. The BAEP were recorded with an RMS EMG EP Marc-II Channel machine. The measures included latencies of waves I, II, III, IV, V and interpeak latencies (IPL) I-III, III-V and I-V separately for both ears. Data were analysed statistically with SPSS software v13.0. Results: It was found that IPL I-III was significantly delayed (P = 0.028) only in the right ear, while the latency of wave V and IPL I-V showed a significant delay bilaterally (P values for the right ear being 0.021 and 0.0381 respectively, and those for the left ear being 0.028 and 0.016 respectively) in diabetic females. However, no significant difference (P > 0.05) was found between diabetic and control subjects as regards the latencies of waves I, II, III, IV and IPL III-V bilaterally and IPL I-III unilaterally in the left ear. Also, none of the BAEP latencies were significantly correlated with either the duration of disease or with fasting blood glucose levels in diabetics. Conclusions: Therefore, it could be concluded that diabetic patients have early involvement of the central auditory pathway, which can be detected quite accurately with the help of auditory evoked potential studies.
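A per-wave latency comparison between diabetic and control groups of this kind can be sketched with an independent-samples t-test. The latency values below are simulated for illustration (the group means, spreads, and the chosen wave are assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated wave V latencies in ms (right ear): a small delay in diabetics
controls = rng.normal(5.60, 0.20, 100)   # 100 age-matched controls
diabetics = rng.normal(5.72, 0.22, 116)  # 116 diabetic participants

# Welch's t-test (no equal-variance assumption), as for unmatched group sizes
t, p = stats.ttest_ind(diabetics, controls, equal_var=False)
print(f"wave V: t = {t:.2f}, p = {p:.4f}")
```

In the actual study design this comparison would be repeated per wave, per interpeak latency, and per ear, which is why each reported P value is tied to a specific measure and side.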
Full Text Available Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After the cats were trained to perform a Go/No-Go task cued by sounds, we found that they could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity can be easily integrated into an animal's behavioral decision process, which has implications for the development of cortical auditory prosthetics.
Full Text Available The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence, listeners report switching between different perceptual organizations. Using the 'ABA-' auditory streaming paradigm, we trained listeners until they could reliably recognise all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which, by competing for dominance, give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in
Encina Llamas, Gerard; M. Harte, James; Epp, Bastian
… cause auditory nerve fiber (ANF) deafferentation in predominantly low-spontaneous rate (SR) fibers. In the present study, auditory steady-state response (ASSR) level growth functions were measured to evaluate the applicability of ASSR to assess compression and the ability to code intensity fluctuations at high stimulus levels. Level growth functions were measured in normal-hearing adults at stimulus levels ranging from 20 to 90 dB SPL. To evaluate compression, ASSR were measured for multiple carrier frequencies simultaneously. To evaluate intensity coding at high intensities, ASSR were measured using … The results indicate that the slope of the ASSR level growth function can be used to estimate peripheral compression simultaneously at four frequencies below 60 dB SPL, while the slope above 60 dB SPL may provide information about the integrity of intensity coding of low-SR fibers.
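The slope-based reading of ASSR level growth functions described above can be sketched by fitting separate linear slopes below and above 60 dB SPL. The response values, slope magnitudes, and breakpoint here are synthetic illustrations, not measured data:

```python
import numpy as np

levels = np.arange(20, 95, 5)  # stimulus level, dB SPL

# Synthetic ASSR amplitude growth: shallow (compressive) below 60 dB SPL,
# steeper above 60 dB SPL, plus measurement noise
rng = np.random.default_rng(3)
amp = np.where(levels < 60,
               0.2 * levels,                          # shallow low-level slope
               0.2 * 60 + 0.6 * (levels - 60)) \
      + rng.normal(0, 0.3, levels.size)

# Fit a straight line to each level region; the first polyfit
# coefficient of a degree-1 fit is the slope
low, high = levels < 60, levels >= 60
slope_low = np.polyfit(levels[low], amp[low], 1)[0]
slope_high = np.polyfit(levels[high], amp[high], 1)[0]
print(f"slope below 60 dB SPL: {slope_low:.2f}, above: {slope_high:.2f}")
```

Under the interpretation in the abstract, the low-level slope would index peripheral compression, while a change in the high-level slope would speak to how well intensity is still coded at high levels.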
Full Text Available Background: In schizophrenic clients, self-care strategies against auditory hallucinations can decrease the disturbances resulting from hallucination. This study aimed to assess the frequency of self-care strategies against auditory hallucinations in paranoid schizophrenic patients hospitalized in Shafa Hospital. Materials and Method: This was a descriptive study of 201 patients with paranoid schizophrenia hospitalized in a psychiatry unit in Rasht, recruited by convenience sampling. The gathered data consisted of two parts: the first covered demographic characteristics, and the second was a self-report questionnaire including 38 items on self-care strategies. Results: There were statistically significant relationships between demographic variables and knowledge of the effect of self-care strategies against auditory hallucinations: sex with the physical domain (p 0.07), marital status with the cognitive domain (p > 0.07), and life status with the behavioural domain (p > 0.01). Of the reported auditory hallucinations, 53.2% were command hallucinations; furthermore, the most effective self-care strategies against auditory hallucinations were from the physical domain, with substance abuse (82.1%) the most effective strategy in this domain. Conclusion: Clients with paranoid schizophrenia mostly used physical-domain strategies against auditory hallucinations, and this result highlights their need for appropriate nursing intervention, including instruction and guidance in selecting effective self-care strategies against auditory hallucinations.
Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.
Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)
We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)
Georgiev, Dejan; Jahanshahi, Marjan; Dreo, Jurij; Čuš, Anja; Pirtošek, Zvezdan; Repovš, Grega
Parkinson's disease (PD) patients show signs of cognitive impairment, such as executive dysfunction, working memory problems, and attentional disturbances, even in the early stages of the disease. Although motor symptoms of the disease are often successfully addressed by dopaminergic medication, it remains unclear how dopaminergic therapy affects cognitive function. The main objective of this study was to assess the effect of dopaminergic medication on visual and auditory attentional processing. Fourteen PD patients and 13 matched healthy controls performed a three-stimulus auditory and visual oddball task while their EEG was recorded. The patients performed the task twice, once on- and once off-medication. While the results showed no significant differences between PD patients and controls, they did reveal a significant increase in P3 amplitude on- vs. off-medication that was specific to the processing of auditory distractors and no other stimuli. These results indicate a significant effect of dopaminergic therapy on the processing of distracting auditory stimuli. In the absence of between-group differences, the effect could reflect: 1) improved recruitment of attentional resources to auditory distractors; 2) reduced ability for cognitive inhibition of auditory distractors; 3) an increased response to distractor stimuli resulting in impaired cognitive performance; or 4) a hindered ability to discriminate between auditory distractors and targets. Further studies are needed to differentiate between these possibilities. Copyright © 2015 Elsevier B.V. All rights reserved.
Kikuchi, Yoshikazu; Okamoto, Tsuyoshi; Ogata, Katsuya; Hagiwara, Koichi; Umezaki, Toshiro; Kenjo, Masamutsu; Nakagawa, Takashi; Tobimatsu, Shozo
In a previous magnetoencephalographic study, we showed both functional and structural reorganization of the right auditory cortex and impaired left auditory cortex function in people who stutter (PWS). In the present work, we reevaluated the same dataset to further investigate how the right and left auditory cortices interact to compensate for stuttering. We evaluated bilateral N100m latencies as well as indices of local and inter-hemispheric phase synchronization of the auditory cortices. The left N100m latency was significantly prolonged relative to the right N100m latency in PWS, while healthy control participants did not show any inter-hemispheric differences in latency. A phase-locking factor (PLF) analysis, which indicates the degree of local phase synchronization, demonstrated enhanced alpha-band synchrony in the right auditory area of PWS. A phase-locking value (PLV) analysis of inter-hemispheric synchronization demonstrated significant elevations in the beta band between the right and left auditory cortices in PWS. In addition, right PLF and PLVs were positively correlated with stuttering frequency in PWS. Taken together, our data suggest that increased right hemispheric local phase synchronization and increased inter-hemispheric phase synchronization are electrophysiological correlates of a compensatory mechanism for impaired left auditory processing in PWS. Published by Elsevier B.V.
Oltedal, Leif; Hugdahl, Kenneth
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
Introduction: It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities, such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. Objective: This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. Methods: During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (controls), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. Results: The average GIN threshold was significantly smaller for the control group than for the CHL group for both ears (right: p = 0.004; left: p < 0.05). Conclusion: The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended.
Chen, L; Toth, M
Fragile X syndrome is the most prevalent cause of mental retardation. It is usually caused by the transcriptional inactivation of the FMR-1 gene. Although the cognitive defect is the most recognized symptom of fragile X syndrome, patients also show behavioral problems such as hyperarousal, hyperactivity, autism, aggression, anxiety and increased sensitivity to sensory stimuli. Here we investigated whether fragile X mice (fmr-1 gene knockout mice) exhibit abnormal sensitivity to sensory stimuli. First, hyperreactivity of fragile X mice to auditory stimuli was indicated in the prepulse inhibition paradigm. A moderately intense prepulse tone, which suppresses the startle response to a strong auditory stimulus, elicited a significantly stronger effect in fragile X than in control mice. Second, sensory hyperreactivity of fragile X mice was demonstrated by a high seizure susceptibility to auditory stimulation. Selective induction of c-Fos, an immediate-early gene product, indicated that seizures involve auditory brainstem and thalamic nuclei. Audiogenic seizures were not due to a general increase in brain excitability because three different chemical convulsants (kainic acid, bicuculline and pentylenetetrazole) elicited similar effects in fragile X and wild-type mice. These data are consistent with the increased responsiveness of fragile X patients to auditory stimuli. The auditory hypersensitivity suggests abnormal processing in the auditory system of fragile X mice, which could provide a useful model to study the molecular and cellular changes underlying fragile X syndrome.
Shiller, Douglas M; Rochon, Marie-Lyne
Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.
Norrix, Linda W.; Plante, Elena; Vance, Rebecca
Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of the syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…
Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André
Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Michael A. Nees
Researchers have shown increased interest in mechanisms of working memory for nonverbal sounds such as music and environmental sounds. These studies often have used two-stimulus comparison tasks: two sounds separated by a brief retention interval (often 3 to 5 s) are compared, and a same/different judgment is recorded. Researchers seem to have assumed that sensory memory has a negligible impact on performance in auditory two-stimulus comparison tasks. This assumption is examined in detail…
Research on auditory verbal hallucinations (AVHs) indicates that schizophrenia patients with AVHs show greater abnormalities on tasks requiring recognition of affective prosody (AP) than non-AVH patients. Detecting AP requires accurate perception of manipulations in pitch, amplitude, and duration. Schizophrenia patients with AVHs also experience difficulty detecting these acoustic manipulations, with a number of theorists speculating that difficulties in pitch, amplitude, and duration discrimination underlie AP abnormalities. This study examined whether both AP and these aspects of auditory processing are also impaired in first-degree relatives of persons with AVHs. It also examined whether pitch, amplitude, and duration discrimination were related to AP and to hallucination proneness. Unaffected relatives of schizophrenia patients with AVHs (N = 19) and matched healthy controls (N = 33) were compared using tone discrimination tasks, an AP task, and clinical measures. Relatives were slower at identifying emotions on the AP task (p = .002), with secondary analysis showing this was especially so for happy (p = .014) and neutral (p = .001) sentences. There was a significant interaction effect for pitch between tone deviation level and group (p = .019), and relatives performed worse than controls on amplitude discrimination and duration discrimination. AP performance for happy and neutral sentences was significantly correlated with amplitude perception. Lastly, AVH proneness in the entire sample was significantly correlated with pitch discrimination (r = .44), and pitch perception was shown to predict AVH proneness in the sample (p = .005). These results suggest that basic impairments in auditory processing are present in relatives of AVH patients; they potentially underlie processing speed in AP tasks and predict AVH proneness. This indicates that auditory processing deficits may be a core feature of AVHs in schizophrenia and are worthy of further study as a potential endophenotype for…
Background and Aim: Physiologic measures of cochlear and auditory nerve function may be of assistance in distinguishing hearing disorders due primarily to auditory nerve impairment from those due primarily to cochlear hair cell dysfunction. The goal of the present study was to measure cochlear responses (otoacoustic emissions and cochlear microphonics) and the auditory brainstem response in adults with auditory neuropathy/dys-synchrony and in subjects with normal hearing. Materials and Methods: Patients were 16 adults (32 ears), aged 14-30 years, with auditory neuropathy/dys-synchrony, and 16 individuals aged 16-30 years of both sexes. The results of transient otoacoustic emission, cochlear microphonic, and auditory brainstem response measures were compared between the two groups, and the effects of age, sex, ear, and degree of hearing loss were studied. Results: The pure-tone average was 48.1 dB HL in the auditory neuropathy/dys-synchrony group, and low-tone loss and flat audiograms were more frequent than other audiogram shapes. Transient otoacoustic emissions were present in all auditory neuropathy/dys-synchrony subjects except two cases, and their average was similar in both groups. The latency and amplitude of the largest reversed cochlear microphonic response were significantly higher in auditory neuropathy/dys-synchrony patients than in controls. The correlation between cochlear microphonic amplitude and degree of hearing loss was not significant, and age had a significant effect on some cochlear microphonic measures. The auditory brainstem response was absent in auditory neuropathy/dys-synchrony patients even at low stimulus rates. Conclusion: In adults whose speech understanding is worse than predicted from the degree of hearing loss, raising suspicion of auditory neuropathy/dys-synchrony, low-tone loss and flat audiograms are more frequent. Usually the auditory brainstem response is absent in…
People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performance when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms, and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone, and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
Christopher I Petkov
Anatomical studies propose that the primate auditory cortex contains more fields than have actually been functionally confirmed or described. Spatially resolved functional magnetic resonance imaging (fMRI) with carefully designed acoustical stimulation could be ideally suited to extend our understanding of the processing within these fields. However, after numerous experiments in humans, many auditory fields remain poorly characterized. Imaging the macaque monkey is of particular interest as this species has a richer set of anatomical and neurophysiological data to clarify the source of the imaged activity. We functionally mapped the auditory cortex of behaving and of anesthetized macaque monkeys with high-resolution fMRI. By optimizing our imaging and stimulation procedures, we obtained robust activity throughout auditory cortex using tonal and band-passed noise sounds. Then, by varying the frequency content of the sounds, spatially specific activity patterns were observed over this region. As a result, the activity patterns could be assigned to many auditory cortical fields, including those whose functional properties were previously undescribed. The results provide an extensive functional tessellation of the macaque auditory cortex and suggest that 11 fields contain neurons tuned for the frequency of sounds. This study provides functional support for a model in which three fields in primary auditory cortex are surrounded by eight neighboring "belt" fields in non-primary auditory cortex. The findings can now guide neurophysiological recordings in the monkey to expand our understanding of the processing within these fields. Additionally, this work will improve fMRI investigations of the human auditory cortex.
Tierney, Adam; Kraus, Nina
Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.
Bicak, Mehmet M. A.
Detailed acoustic engineering models explore the noise propagation mechanisms associated with the noise attenuation and transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed from volume computed tomography scan data, which provide explicit geometry of the external ear, ear canal, middle ear ossicular bones, and cochlea. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as a prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.
Simone Fiuza Regaçone
The objective of this study was to evaluate the association between resting heart rate (HR) and the components of auditory event-related potentials (ERPs) at rest in women. We investigated 21 healthy female university students between 18 and 24 years old. We performed a complete audiological evaluation, measured heart rate for 10 minutes at rest (Polar RS800CX heart rate monitor), and analyzed the ERPs (frequency and duration discrepancies). There was a moderate negative correlation of the N1 and P3a components with resting HR and a strong positive correlation of the P2 and N2 components with resting HR. Larger ERP components are associated with higher resting HR.
Féron, François-Xavier; Frissen, Ilja; Boissinot, Julien; Guastavino, Catherine
Three experiments are reported, which investigated the auditory velocity thresholds beyond which listeners are no longer able to perceptually resolve a smooth circular trajectory. These thresholds were measured for band-limited noises, white noise, and harmonic sounds (HS), and in different acoustical environments. Experiments 1 and 2 were conducted in an acoustically dry laboratory. Observed thresholds varied as a function of stimulus type and spectral content. Thresholds for band-limited noises were unaffected by center frequency and equal to that of white noise. For HS, however, thresholds decreased as the fundamental frequency of the stimulus increased. The third experiment was a replication of the second in a reverberant concert hall, which produced qualitatively similar results except that thresholds were significantly higher than in the acoustically dry laboratory.
Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari
Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians, using a novel, fast, and musical-sounding multifeature MMN paradigm. We found an MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitudes than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. We also observed a tendency toward shorter MMN latency to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music. © 2012 New York Academy of Sciences.
Picton, T. W.; Hillyard, S. A.; Krausz, H. I.; Galambos, R.
Fifteen distinct components can be identified in the scalp recorded average evoked potential to an abrupt auditory stimulus. The early components occurring in the first 8 msec after a stimulus represent the activation of the cochlea and the auditory nuclei of the brainstem. The middle latency components occurring between 8 and 50 msec after the stimulus probably represent activation of both auditory thalamus and cortex but can be seriously contaminated by concurrent scalp muscle reflex potentials. The longer latency components occurring between 50 and 300 msec after the stimulus are maximally recorded over fronto-central scalp regions and seem to represent widespread activation of frontal cortex.
Camila Maia Rabelo
INTRODUCTION: The auditory steady-state response (ASSR) test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady-state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady-state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation: 46 Hz). The difference between the auditory steady-state response-estimated thresholds and the behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady-state response-estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group, relative to the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR-estimated thresholds and actual behavioral thresholds, with ASSR-estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR-estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady…
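Sensitivity and specificity, as targeted by the study above, are ratios over a diagnostic confusion table. A minimal sketch (the counts below are hypothetical, not the study's data):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of affected subjects the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of unaffected subjects the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts for a test detecting CANS lesions/dysfunctions
tp, fn = 18, 2   # affected: detected vs missed
tn, fp = 45, 5   # unaffected: cleared vs falsely flagged

print(sensitivity(tp, fn))  # 0.9
print(specificity(tn, fp))  # 0.9
```

In practice the threshold separating "detected" from "missed" (here, the ASSR-behavioral threshold difference) trades sensitivity against specificity.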
A time-frequency auditory model is presented. The model uses the wavelet packet analysis as the preprocessor. The auditory filters are modelled by the rounded exponential filters, and the excitation is smoothed by a window function. By comparing time-frequency excitation patterns, it is shown … that the change in the time-frequency excitation pattern introduced when a test tone at masked threshold is added to the masker is approximately equal to 7 dB for all types of maskers. The classic detection ratio therefore overrates the detection efficiency of the auditory system…
Factor, Stewart A; Molho, Eric S
Psychotic symptoms are commonly reported in patients with Parkinson disease (PD). In particular, patients experience nonthreatening visual hallucinations that can occur with insight (so called hallucinosis) or without. Auditory hallucinations are uncommon, and schizophrenialike symptoms such as pejorative and threatening auditory hallucinations and delusions that are persecutory, referential, somatic, religious, or grandiose have rarely been reported. The authors present 2 PD patients who experienced threatening auditory hallucinations, without visual hallucinations, and schizophrenialike delusions with detailed description of the clinical phenomenology including 1 patient with Cotard syndrome.
Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien
…features of the recording environment and listener's anatomy to be recorded at the listener's ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested … whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings…
Picton, T. W.; Hillyard, S. A.
Attention directed toward auditory stimuli, in order to detect an occasional fainter 'signal' stimulus, caused a substantial increase in the N1 (83 msec) and P2 (161 msec) components of the auditory evoked potential without any change in preceding components. This evidence shows that human auditory attention is not mediated by a peripheral gating mechanism. The evoked response to the detected signal stimulus also contained a large P3 (450 msec) wave that was topographically distinct from the preceding components. This late positive wave could also be recorded in response to a detected omitted stimulus in a regular train and therefore seemed to index a stimulus-independent perceptual decision process.
Winawer, Melodie R.; Hauser, W. Allen; Pedley, Timothy A.
The authors previously reported linkage to chromosome 10q22-24 for autosomal dominant partial epilepsy with auditory features. This study describes seizure semiology in the original linkage family in further detail. Auditory hallucinations were most common, but other sensory symptoms (visual, olfactory, vertiginous, and cephalic) were also reported. Autonomic, psychic, and motor symptoms were less common. The clinical semiology points to a lateral temporal seizure origin. Auditory hallucinations, the most striking clinical feature, are useful for identifying new families with this syndrome. PMID:10851389
Hanna-Pladdy, Brenda; Choi, Hyun
The naming of manipulable objects in older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were more impaired and slower to name action sounds than pictures or audiovisual combinations. Moreover, there was a sensory by age group interaction, revealing lower accuracy and increased latencies in auditory naming for older adults unrelated to hearing insensitivity but modest improvement to multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive than visual naming. (c) 2010 APA, all rights reserved.
Favrot, Sylvain Emmanuel
…to systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory … environments (VAEs) with an array of loudspeakers. The LoRA system combines state-of-the-art acoustic room models with sound-field reproduction techniques. Limitations of these two techniques were taken into consideration together with the limitations of the human auditory system to localize sounds…
Pacheco-Unguetti, Antonia P; Parmentier, Fabrice B R
Research shows that attention is ineluctably captured away from a focal visual task by rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards). The fundamental cognitive mechanisms underlying this effect have been the object of an increasing number of studies but their sensitivity to mood and emotions remains relatively unexplored despite suggestion of greater distractibility in negative emotional contexts. In this study, we examined the effect of sadness, a widespread form of emotional distress and a symptom of many disorders, on distraction by deviant sounds. Participants received either a sadness induction or a neutral mood induction by means of a mixed procedure based on music and autobiographical recall prior to taking part in an auditory-visual oddball task in which they categorized visual digits while ignoring task-irrelevant sounds. The results showed that although all participants exhibited significantly longer response times in the visual categorization task following the presentation of rare and unexpected deviant sounds relative to that of the standard sound, this distraction effect was significantly greater in participants who had received the sadness induction (a twofold increase). The residual distraction on the subsequent trial (postdeviance distraction) was equivalent in both groups, suggesting that sadness interfered with the disengagement of attention from the deviant sound and back toward the target stimulus. We propose that this disengagement impairment reflected the monopolization of cognitive resources by sadness and/or associated ruminations. Our findings suggest that sadness can increase distraction even when distractors are emotionally neutral. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Niels Chr. Hansen
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty: a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
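The central quantity here, Shannon entropy over a next-note probability distribution, can be sketched in a few lines. The probability values below are illustrative assumptions, not the study's Markov-model estimates; only the entropy formula itself follows from the abstract:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A melodic context where one continuation dominates -> low predictive uncertainty
low = shannon_entropy([0.85, 0.05, 0.05, 0.05])

# A context with four equiprobable continuations -> maximal uncertainty (2 bits)
high = shannon_entropy([0.25, 0.25, 0.25, 0.25])
```

A high-entropy context is one where the listener's predictive model spreads probability widely; the entropy of the rating distribution plays the same role for the "inferred uncertainty" measure.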
Horowitz, Seth S.; Tanyu, Leslie H.; Simmons, Andrea Megela
Sensory development can be dependent on input from multiple modalities. During metamorphic development, ranid frogs exhibit rapid reorganization of pathways mediating auditory, vestibular, and lateral line modalities as the animal transforms from an aquatic to an amphibious form. Here we show that neural sensitivity to the underwater particle motion component of sound follows a different developmental trajectory than that of the pressure component. Throughout larval stages, cells in the medial vestibular nucleus show best frequencies to particle motion in the range from 15 to 65 Hz, with displacement thresholds of <10 μm. During metamorphic climax, best frequencies significantly increase, and sensitivity to lower-frequency (<25 Hz) stimuli tends to decline. These findings suggest that continued sensitivity to particle motion may compensate for the considerable loss of sensitivity to pressure waves observed during the developmental deaf period. Transport of a lipophilic dye from peripheral end organs to the dorsal medulla shows that fibers from the saccule in the inner ear and from the anterior lateral line both terminate in the medial vestibular nucleus. Saccular projections remain stable across larval development, whereas lateral line projections degenerate during metamorphic climax. Sensitivity to particle motion may be based on multimodal input early in development and on saccular input alone during the transition to amphibious life. PMID:17251417
Młynarski, Wiktor; McDermott, Josh H
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Wilson, Wayne J; Arnott, Wendy; Henning, Caroline
To systematically review the peer-reviewed literature on electrophysiological outcomes following auditory training (AT) in school-age children with (central) auditory processing disorder ([C]APD). A systematic review. Searches of 16 electronic databases yielded four studies involving school-aged children whose auditory processing deficits had been confirmed in a manner consistent with ASHA (2005) and AAA (2010) and compared to a treated and/or an untreated control group before and after AT. A further three studies were identified with one lacking a control group and two measuring auditory processing in a manner not consistent with ASHA (2005) and AAA (2010). There is limited evidence that AT leads to measurable electrophysiological changes in children with auditory processing deficits. The evidence base is too small and weak to provide clear guidance on the use of electrophysiological outcomes as a measure of AT outcomes in children with auditory processing problems. The currently limited data can only be used to suggest that click-evoked AMLR and tone-burst evoked auditory P300 might be more likely to detect such outcomes in children diagnosed with (C)APD, and that speech-evoked ALLR might be more likely to detect phonological processing changes in children without a specific diagnosis of (C)APD.
Yathiraj, Asha; Maggu, Akshay Raj
The presence of auditory processing disorder in school-age children has been documented (Katz and Wilde, 1985; Chermak and Musiek, 1997; Jerger and Musiek, 2000; Muthuselvi and Yathiraj, 2009). In order to identify these children early, there is a need for a screening test that is not very time-consuming. The present study aimed to evaluate the independence of four subsections of the Screening Test for Auditory Processing (STAP) developed by Yathiraj and Maggu (2012). The test was designed to address auditory separation/closure, binaural integration, temporal resolution, and auditory memory in school-age children. The study also aimed to examine the number of children who are at risk for different auditory processes. Factor analysis research design was used in the current study. Four hundred school-age children consisting of 218 males and 182 females were randomly selected from 2400 children attending three schools. The children, aged 8 to 13 yr, were in grade three to eight class placements. DATA COLLECTION AND ANALYSES: The children were evaluated on the four subsections of the STAP (speech perception in noise, dichotic consonant-vowel [CV], gap detection, and auditory memory) in a quiet room within their school. The responses were analyzed using principal component analysis (PCA) and confirmatory factor analysis (CFA). In addition, the data were also analyzed to determine the number of children who were at risk for an auditory processing disorder (APD). Based on the PCA, three components with eigenvalues greater than 1 were extracted. The orthogonal rotation of the variables using the Varimax technique revealed that component 1 consisted of binaural integration, component 2 consisted of temporal resolution, and component 3 was shared by auditory separation/closure and auditory memory. These findings were confirmed using CFA, where the predicted model displayed a good fit with or without the inclusion of the auditory memory subsection. It was determined that 16…
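The component-extraction step described above, PCA on the subsection correlation matrix with components retained when their eigenvalue exceeds 1 (the Kaiser criterion), can be sketched as follows. The scores are simulated from hypothetical latent abilities, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # number of children, as in the study; the scores themselves are simulated

# Two hypothetical latent abilities generating four subsection scores
latent = rng.normal(size=(n, 2))
scores = np.column_stack([
    latent[:, 0] + 0.3 * rng.normal(size=n),  # e.g., auditory separation/closure
    latent[:, 0] + 0.3 * rng.normal(size=n),  # e.g., auditory memory (shares a factor)
    latent[:, 1] + 0.3 * rng.normal(size=n),  # e.g., binaural integration (dichotic CV)
    rng.normal(size=n),                       # e.g., temporal resolution (independent)
])

# PCA via eigen-decomposition of the correlation matrix
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain components whose eigenvalue exceeds 1
retained = eigvals[eigvals > 1.0]
```

Because the correlation matrix has unit diagonal, the eigenvalues always sum to the number of subtests; components above 1 explain more variance than any single subtest.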
Panx1 forms plasma membrane channels in brain and several other organs, including the inner ear. Biophysical properties, activation mechanisms and modulators of Panx1 channels have been characterized in detail; however, the impact of Panx1 on auditory function is unclear due to conflicts in published results. To address this issue, hearing performance and cochlear function of the Panx1−/− mouse strain, the first with a reported global ablation of Panx1, were scrutinized. Male and female homozygous (Panx1−/−), hemizygous (Panx1+/−) and their wild type (WT) siblings (Panx1+/+) were used for this study. Successful ablation of Panx1 was confirmed by RT-PCR and Western immunoblotting in the cochlea and brain of Panx1−/− mice. Furthermore, a previously validated Panx1-selective antibody revealed strong immunoreactivity in WT but not in Panx1−/− cochleae. Hearing sensitivity, outer hair cell-based "cochlear amplifier" and cochlear nerve function, analyzed by auditory brainstem response (ABR) and distortion product otoacoustic emission (DPOAE) recordings, were normal in Panx1+/− and Panx1−/− mice. In addition, we determined that global deletion of Panx1 impacts neither on connexin expression, nor on gap-junction coupling in the developing organ of Corti. Finally, spontaneous intercellular Ca2+ signal (ICS) activity in organotypic cochlear cultures, which is key to postnatal development of the organ of Corti and essential for hearing acquisition, was not affected by Panx1 ablation. Therefore, our results provide strong evidence that, in mice, Panx1 is dispensable for hearing acquisition and auditory function.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote a reflection about the importance of speech therapy for stimulating a person with learning disability associated with language and auditory processing disorders. Data analysis considered the auditory abilities deficits identified in the first auditory processing test, held on April 30, 2002, compared with the new auditory processing test done on May 13, 2003, after one year of therapy directed to acoustic stimulation of auditory abilities disorders, in a...
Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng
To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used the cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good optional treatment for many ANSD children. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months.
Kelly H. Chang
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen level dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject's auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., "Somewhere Over the Rainbow"). By finding the frequency that minimized the difference between the model's prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
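The decoding idea, Gaussian frequency tuning per voxel in log-frequency space, then picking the tone that best explains the measured responses, can be sketched with simulated data. The voxel count, tuning width, and noise level below are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tonotopic map: each voxel responds as a Gaussian in log2(frequency)
n_vox = 50
centers = rng.uniform(np.log2(200), np.log2(4000), n_vox)  # preferred log-frequencies
width = 0.7                                                # tuning width in octaves

def predicted_response(log_f):
    """Model prediction of all voxel responses to a tone at log2-frequency log_f."""
    return np.exp(-0.5 * ((log_f - centers) / width) ** 2)

# Forward step: simulate noisy BOLD-like responses to a true 440 Hz tone
bold = predicted_response(np.log2(440.0)) + 0.05 * rng.normal(size=n_vox)

# Reverse step: choose the candidate frequency minimizing squared prediction error
candidates = np.logspace(np.log10(200), np.log10(4000), 500)
errors = [np.sum((predicted_response(np.log2(f)) - bold) ** 2) for f in candidates]
estimate = candidates[int(np.argmin(errors))]
```

Repeating the reverse step for each time point of a song yields a reconstructed tone sequence, the abstract's "half an octave or less" figure corresponds to the log2 distance between estimate and true frequency.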
Pannese, Alessia; Herrmann, Christoph S; Sussman, Elyse
Detecting regularity and change in the environment is crucial for survival, as it enables making predictions about the world and informing goal-directed behavior. In the auditory modality, the detection of regularity involves segregating incoming sounds into distinct perceptual objects (stream segregation). The detection of change from this within-stream regularity is associated with the mismatch negativity, a component of auditory event-related brain potentials (ERPs). A central unanswered question is how the detection of regularity and the detection of change are interrelated, and whether attention affects the former, the latter, or both. Here we show that the detection of regularity and the detection of change can be empirically dissociated, and that attention modulates the detection of change without precluding the detection of regularity, and the perceptual organization of the auditory background into distinct streams. By applying frequency spectra analysis on the EEG of subjects engaged in a selective listening task, we found distinct peaks of ERP synchronization, corresponding to the rhythm of the frequency streams, independently of whether the stream was attended or ignored. Our results provide direct neurophysiological evidence of regularity detection in the auditory background, and show that it can occur independently of change detection and in the absence of attention.
The midbrain nucleus mesencephalicus lateralis pars dorsalis (MLd) is thought to be the avian homologue of the central nucleus of the mammalian inferior colliculus. As such, it is a major relay in the ascending auditory pathway of all birds and in songbirds mediates the auditory feedback necessary for the learning and maintenance of song. To clarify the organization of MLd, we applied three calcium binding protein antibodies to tissue sections from the brains of adult male and female zebra finches. The staining patterns resulting from the application of parvalbumin, calbindin and calretinin antibodies differed from each other and in different parts of the nucleus. Parvalbumin-like immunoreactivity was distributed throughout the whole nucleus, as defined by the totality of the terminations of brainstem auditory afferents; in other words, parvalbumin-like immunoreactivity defines the boundaries of MLd. Staining patterns of parvalbumin, calbindin and calretinin defined two regions of MLd: inner (MLd.I) and outer (MLd.O). MLd.O largely surrounds MLd.I and is distinct from the surrounding intercollicular nucleus. Unlike the case in some non-songbirds, however, the two MLd regions do not correspond to the terminal zones of the projections of the brainstem auditory nuclei angularis and laminaris, which have been found to overlap substantially throughout the nucleus in zebra finches.
Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.
Blamey, P J; Cowan, R S; Alcantara, J I; Whitford, L A; Clark, G M
Four normally-hearing subjects were trained and tested with all combinations of a highly-degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor...
Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten
Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...
Cooper, Judith A.; Ferry, Peggy C.
The paper presents a review of cases of children with acquired aphasia with convulsive disorder and discusses clinical features of three additional children in whom the specific syndrome of auditory verbal agnosia was identified. (Author/CL)
Bakker, Mirte J.; Boer, Frits; Benninga, Marc A.; Koelman, Johannes H. T. M.; Tijssen, Marina A. J.
Objective: To test the hypothesis that children with abdominal pain-related functional gastrointestinal disorders have a general hypersensitivity for sensory stimuli. Study design: Auditory startle reflexes were assessed in 20 children classified according to Rome III classifications of abdominal…
Paschoal, Carolina Pamplona; Azevedo, Marisa Frasson de
Smoking is a public health concern and we are still unsure of its relation with auditory problems. To study the effects of cigarette smoking on auditory thresholds, on otoacoustic emissions and on their inhibition by the efferent olivocochlear medial system. 144 adults of both genders, between 20 and 31 years of age, smoking and non-smoking individuals, were submitted to conventional and high-frequency audiometry, transient stimuli otoacoustic emissions and suppression effect investigation. Smokers presented worse auditory thresholds at the frequencies of 12,500 Hz in the right ear and 14,000 Hz in both ears. Regarding the otoacoustic emissions, the smokers group presented a lower response level at the frequencies of 1,000 Hz in both ears and 4,000 Hz in the left ear. Among smokers there were more cases of cochlear dysfunction and tinnitus. Our results suggest that cigarette smoking has an adverse effect on the auditory system.
José Luis Ventura-León
The purpose of the study is to determine the relationship between a group of writing tasks and immediate auditory memory, as well as to establish differences according to sex and level of study. Two hundred and three schoolchildren in the fifth and sixth grades of elementary education in Lima (Peru) participated; they were selected by non-probabilistic sampling. The Immediate Auditory Memory Test and the Battery for Evaluation of Writing Processes (known in Spanish as PROESC) were used. Central tendency measures were used for the descriptive analysis. We employed the Mann-Whitney U test, Spearman's Rho test, and probability of superiority as an effect size measure for the inferential analysis. The results indicated a moderate, direct, and significant correlation between writing tasks and immediate auditory memory overall, and low correlations between dimensions. Finally, the differences in immediate auditory memory and writing tasks according to sex and level of study showed no practical significance.
Cappagli, Giulia; Gori, Monica
For individuals with visual impairments, auditory spatial localization is one of the most important abilities for navigating the environment. Many works suggest that blind adults show similar or even enhanced performance in localization of auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in children with visual impairments, and that this capacity recovers over time. Copyright © 2016 Elsevier Ltd. All rights reserved.
Liemburg, Edith J.; Vercammen, Ans; Ter Horst, Gert J.; Curcic-Blake, Branislava; Knegtering, Henderikus; Aleman, Andre
Brain circuits involved in language processing have been suggested to be compromised in patients with schizophrenia. This does not only include regions subserving language production and perception, but also auditory processing and attention. We investigated resting state network connectivity of
The paper provides an overview of audiological terms and types of hearing impairments to help teachers of visually impaired preschoolers work more effectively with audiologists. Both functional auditory assessment and formal audiometric evaluations are discussed. (Author/CL)
primaquine, atovaquone and pentamidine isethionate.1,3 Although there is usually resolution of symptoms after medical therapy, surgical excision of lesions in the external auditory canal may be clinically indicated. Disseminated pneumocystosis, like ...
Stewart, Ian; Lavelle, Niamh
This study extended previous research on stimulus equivalence with all auditory stimuli by using a methodology more similar to conventional match-to-sample training and testing for three 3-member equivalence relations...
Sparreboom, M.; Beynon, A.J.; Snik, A.F.M.; Mylanus, E.A.M.
OBJECTIVE: To assess the effect of sequential bilateral cochlear implantation on auditory cortical maturation after various periods of unilateral cochlear implant use. STUDY DESIGN: Prospective cohort study. SETTING: Tertiary academic referral center. PATIENTS: Thirty prelingually deaf children,
enhanced relative to the non-musicians for both resolved and unresolved harmonics in the right auditory cortex, right frontal regions and inferior colliculus. However, the increase in neural activation in the right auditory cortex of musicians was predictive of the increased pitch … Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues … of training, which seemed to be specific to the stimuli containing resolved harmonics. Finally, a functional magnetic resonance imaging paradigm was used to examine the response of the auditory cortex to resolved and unresolved harmonics in musicians and non-musicians. The neural responses in musicians were …
Ainsworth, Matthew; Lee, Shane; Cunningham, Mark O; Roopun, Anita K; Traub, Roger D; Kopell, Nancy J; Whittington, Miles A
… Here we show that, for inhibition-based gamma rhythms in vitro in rat neocortical slices, mechanistically distinct local circuit generators exist in different laminae of rat primary auditory cortex …
Murphy, C.F.B; Schochat, E
Objective: To analyze the effect of nonverbal auditory training on reading and phonological awareness tasks in children with dyslexia and the effect of age in relation to post-training learning considering the ages from 7 to 14. Methods...
Sadovsky, Alexander J; MacLean, Jason N
Mapping the flow of activity through neocortical microcircuits provides key insights into the underlying circuit architecture. Using a comparative analysis we determined the extent to which the dynamics of microcircuits in mouse primary somatosensory barrel field (S1BF) and auditory (A1) neocortex generalize. We imaged the simultaneous dynamics of up to 1126 neurons spanning multiple columns and layers using high-speed multiphoton imaging. The temporal progression and reliability of reactivation of circuit events in both regions suggested common underlying cortical design features. We used circuit activity flow to generate functional connectivity maps, or graphs, to test the microcircuit hypothesis within a functional framework. S1BF and A1 present a useful test of the postulate as both regions map sensory input anatomically, but each area appears organized according to different design principles. We projected the functional topologies into anatomical space and found benchmarks of organization that had been previously described using physiology and anatomical methods, consistent with a close mapping between anatomy and functional dynamics. By comparing graphs representing activity flow we found that each region is similarly organized as highlighted by hallmarks of small world, scale free, and hierarchical modular topologies. Models of prototypical functional circuits from each area of cortex were sufficient to recapitulate experimentally observed circuit activity. Convergence to common behavior by these models was accomplished using preferential attachment to scale from an auditory up to a somatosensory circuit. These functional data imply that the microcircuit hypothesis be framed as scalable principles of neocortical circuit design.
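The graph-building step described above can be sketched in miniature. The lagged co-activation rule, the threshold, and the toy rasters below are illustrative assumptions for exposition, not the authors' actual imaging pipeline:

```python
# Illustrative sketch (not the authors' pipeline): derive a functional
# connectivity graph from binarized activity rasters by counting lagged
# co-activations, then threshold the counts into directed edges. Graphs
# built this way can then be tested for small-world or scale-free structure.

from collections import defaultdict

def functional_graph(rasters, threshold=2):
    """rasters: dict neuron -> list of 0/1 activity per imaging frame.
    Add edge i -> j when j fires one frame after i at least `threshold` times."""
    counts = defaultdict(int)
    neurons = list(rasters)
    frames = len(next(iter(rasters.values())))
    for i in neurons:
        for j in neurons:
            if i == j:
                continue
            counts[(i, j)] = sum(
                rasters[i][t] and rasters[j][t + 1] for t in range(frames - 1)
            )
    return {edge for edge, c in counts.items() if c >= threshold}

# Toy rasters: "a" and "b" alternate frames, so each reliably leads the
# other; "c" fires too rarely to pass the threshold.
rasters = {
    "a": [1, 0, 1, 0, 1, 0],
    "b": [0, 1, 0, 1, 0, 1],
    "c": [0, 0, 1, 0, 0, 1],
}
edges = functional_graph(rasters, threshold=2)
print(sorted(edges))
```

On real data the adjacency would be computed from thousands of frames and the threshold chosen against a shuffled-raster null model; the degree distribution of the resulting graph is what distinguishes hub-dominated from homogeneous topologies.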
Lucker, Jay R
Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at above-threshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms. The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in the soundproof test suite. The outcomes would indicate whether audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their school. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. The same equipment was used for testing in both locations. Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth. No differences were found for each auditory processing measure as a function of the test setting or the order in which testing was done.
Pedro H Pondé,1 Eduardo P de Sena,2 Joan A Camprodon,3 Arão Nogueira de Araújo,2 Mário F Neto,4 Melany DiBiasi,5 Abrahão Fontes Baptista,6,7 Lidia MVR Moura,8 Camila Cosmo2,3,6,9,10 1Dynamics of Neuromusculoskeletal System Laboratory, Bahiana School of Medicine and Public Health, 2Postgraduate Program in Interactive Process of Organs and Systems, Federal University of Bahia, Salvador, Bahia, Brazil; 3Laboratory for Neuropsychiatry and Neuromodulation and Transcranial Magnetic Stimulation Clinical Service, Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; 4Scientific Training Center Department, School of Medicine of Bahia, Federal University of Bahia, Salvador, Bahia, Brazil; 5Neuromodulation Center, Spaulding Rehabilitation Hospital, Harvard Medical School, Boston, MA, USA; 6Functional Electrostimulation Laboratory, Biomorphology Department, 7Postgraduate Program on Medicine and Human Health, School of Medicine, Federal University of Bahia, Salvador, Bahia, Brazil; 8Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; 9Center for Technological Innovation in Rehabilitation, Federal University of Bahia, 10Bahia State Health Department (SESAB), Salvador, Bahia, Brazil Introduction: Auditory hallucinations are defined as experiences of auditory perceptions in the absence of a provoking external stimulus. They are the most prevalent symptoms of schizophrenia, with high capacity for chronicity and refractoriness during the course of disease. Transcranial direct current stimulation (tDCS), a safe, portable, and inexpensive neuromodulation technique, has emerged as a promising treatment for the management of auditory hallucinations. Objective: The aim of this study is to analyze the level of evidence in the literature available for the use of tDCS as a treatment for auditory hallucinations in schizophrenia. Methods: A systematic review was performed
Champoux, François; Shiller, Douglas M; Zatorre, Robert J
In the present study, we demonstrate an audiotactile effect in which amplitude modulation of auditory feedback during voiced speech induces a throbbing sensation over the lip and laryngeal regions. Control tasks coupled with the examination of speech acoustic parameters allow us to rule out the possibility that the effect may have been due to cognitive factors or motor compensatory effects. We interpret the effect as reflecting the tight interplay between auditory and tactile modalities during vocal production.
Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi
To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD). Three patients who had hearing difficulty as the first clinical signs and/or symptoms of ALD were studied. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds, environmental sounds, and sound lateralization, and strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of these three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia and not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but have no abnormalities in pure tone audiometry and/or ABR, this should not immediately be diagnosed as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Simoens, Veerle L; Tervaniemi, Mari
Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.
Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.
Auditory verbal hallucinations have attracted a great deal of scientific interest, but despite the fact that they are fundamentally a social experience (in essence, a form of hallucinated communication), current theories remain firmly rooted in an individualistic account and have largely avoided engagement with social cognition. Nevertheless, there is mounting evidence for the role of social cognitive and social neurocognitive processes in auditory verbal hallucinations, and, consequently, it is...
Papp, Albert Louis, III [Univ. of California, Davis, CA (United States)]
This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
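The penalty-scoring idea described above can be sketched as follows. The conflict rule (temporal overlap weighted by pitch proximity) and the (start, duration, pitch) message format are hypothetical illustrations, not the dissertation's actual heuristics:

```python
# Hypothetical sketch of penalty-scored scheduling for overlapping audio
# messages: each candidate schedule gets a score summing pairwise
# perceptual-conflict penalties, and the lowest-scoring schedule wins.

def conflict_penalty(a, b):
    """Penalize message pairs that overlap in time and sit close in pitch
    (an assumed stand-in for perceptual-stream confusability)."""
    start_a, dur_a, pitch_a = a
    start_b, dur_b, pitch_b = b
    overlap = min(start_a + dur_a, start_b + dur_b) - max(start_a, start_b)
    if overlap <= 0:
        return 0.0                         # no temporal overlap, no conflict
    pitch_gap = abs(pitch_a - pitch_b)
    return overlap / (1.0 + pitch_gap)     # closer pitches -> worse conflict

def schedule_score(schedule):
    return sum(
        conflict_penalty(schedule[i], schedule[j])
        for i in range(len(schedule))
        for j in range(i + 1, len(schedule))
    )

def best_schedule(candidates):
    return min(candidates, key=schedule_score)

# Two candidate schedules for three messages as (start, duration, pitch):
clashing = [(0, 4, 60), (1, 4, 62), (2, 4, 64)]
staggered = [(0, 4, 60), (4, 4, 62), (8, 4, 64)]
print(best_schedule([clashing, staggered]) is staggered)
```

A real scheduler would also fold the request's priority and maximum acceptable latency into the score, so that a high-priority message is delayed or masked only when every alternative scores worse.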
Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuroplasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD ± 7.0%); auditory cues, a mean percentage error of 12.9% (SD ± 11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, the mean accuracy of subjects approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
SUMMARY: Thirty schizophrenics having verbal auditory hallucinations and satisfying the criteria of Feighner et al. (1972) were examined for the experienced reality of auditory hallucinations and the influence of certain variables on such reality: number of hallucinating days per month, fast movement of time during hallucination, presence of running-commentary voices, interference in self-care and social activities due to the voices, and degree of success in manipulation and avoidance (coping theme...
Brown, Rachel M.; Palmer, Caroline
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced nov...
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question as to the teacher signal for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing is dependent on the specific temporal order of sensory inputs in speech production.
Ross, J M; Will, O J; McGann, Z; Balasubramaniam, R
Fall prevention technologies have the potential to improve the lives of older adults. Because of the multisensory nature of human balance control, sensory therapies, including some involving tactile and auditory noise, are being explored that might reduce increased balance variability due to typical age-related sensory declines. Auditory white noise has previously been shown to reduce postural sway variability in healthy young adults. In the present experiment, we examined this treatment in young adults and typically aging older adults. We measured postural sway of healthy young adults and adults over the age of 65 years during silence and auditory white noise, with and without vision. Our results show reduced postural sway variability in young and older adults with auditory noise, even in the absence of vision. We show that vision and noise can reduce sway variability for both feedback-based and exploratory balance processes. In addition, we show changes with auditory noise in nonlinear patterns of sway in older adults that reflect what is more typical of young adults, and these changes did not interfere with the typical random walk behavior of sway. Our results suggest that auditory noise might be valuable for therapeutic and rehabilitative purposes in older adults with typical age-related balance variability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
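As a rough illustration of the outcome measure implied above, postural sway variability can be summarized as the standard deviation of a centre-of-pressure trace and compared across listening conditions. The traces below are fabricated toy data, not the study's recordings:

```python
# Toy sketch: sway variability as the population standard deviation of a
# centre-of-pressure displacement trace, compared between a silence
# condition and an auditory white-noise condition.

import statistics

def sway_variability(cop_trace):
    """Standard deviation of centre-of-pressure displacement (toy measure)."""
    return statistics.pstdev(cop_trace)

# Fabricated displacement samples (e.g., millimetres from mean position):
silence = [0.0, 1.2, -1.1, 1.3, -1.2, 1.1, -1.3, 0.1]
noise = [0.0, 0.6, -0.5, 0.6, -0.6, 0.5, -0.6, 0.0]

reduction = 1 - sway_variability(noise) / sway_variability(silence)
print(f"variability reduced by {reduction:.0%} with auditory noise")
```

Real analyses typically add nonlinear measures alongside this (the paper mentions random-walk and other nonlinear patterns of sway), but a condition-wise variability comparison is the core contrast.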
M. M. Ghasemi
Full Text Available Background: The aim of this study was to determine the auditory performance of congenitally deaf children and the effect of cochlear implantation (CI) on speech intelligibility. Methods: A prospective study was undertaken on 47 children in a pediatric tertiary referral center for CI. All children were prelingually deaf and were younger than 8 years of age. They were followed up until 5 years after implantation. Auditory performance was assessed using the categories of auditory performance (CAP) scale and a speech intelligibility rating, which evaluated the spontaneous speech of each child before and at frequent intervals for five years after implantation. Results: Prelingually deaf children showed significant improvement in auditory performance after implantation. Six months after implantation, 91% of children had the ability to respond to speech sounds. At the end of year one, 96% of children could discriminate speech sounds, and 84% of children who reached the three-year interval could understand common phrases without lip-reading. After cochlear implantation, speech intelligibility ratings increased significantly each year for 3 years (p<0.05) and did not plateau up to 5 years after implantation. The changes in auditory performance and speech development were parallel. Conclusion: The results indicate that cochlear implantation can significantly improve auditory receptive skills and, subsequently, speech development in young congenitally deaf children.
Full Text Available Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person who was in an upright position, the sound simulates an auditory space rotated 90 degrees relative to the real-world horizontal axis. Our question is whether brain activation patterns differ between the unrealistic auditory space (i.e., the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (i.e., the orientations are congruent). River sounds that were binaurally recorded either in a supine position or in an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and the incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills, which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Clarke, Dave F; Boop, Frederick A; McGregor, Amy L; Perkins, F Frederick; Brewer, Vickie R; Wheless, James W
Ear plugging (placing fingers in or covering the ears) is a clinical seizure semiology that has been described as a response to an unformed auditory hallucination localized to the superior temporal neocortex. Ear plugging associated with more complex auditory hallucinations may involve more extensive circuitry, reducing its localizing value. We report on one child whose aura was a more complex auditory phenomenon, consisting of a door opening and closing, getting louder as the ictus persisted. This child presented, at four years of age, with brief episodes of ear plugging followed by an acute emotional change that persisted until surgical resection of a left mesial frontal lesion at 11 years of age. Scalp video-EEG, magnetic resonance imaging, magnetoencephalography, and invasive video-EEG monitoring were carried out. The scalp EEG changes always started after clinical onset. These were not localizing, and encompassed a wide field over the bi-frontal head regions, the left side predominant over the right. Intracranial video-EEG monitoring with subdural electrodes over both frontal and temporal regions localized the seizure onset to the left mesial frontal lesion. The patient has remained seizure-free since the resection on June 28, 2006, approximately one and a half years ago. Ear plugging in response to simple auditory auras localizes to the superior temporal gyrus. If the patient has more complex, formed auditory auras, not only may the secondary auditory areas in the temporal lobe be involved, but one has to entertain the possibility of ictal onset from the frontal cortex.
Paul Wallace Anderson
Full Text Available Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener's perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.
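The accuracy and variability contrasts described above reduce to two summary statistics per condition: the mean signed error of the distance estimates and their standard deviation. A minimal sketch of that comparison, with hypothetical estimates standing in for the study's data (the function and numbers below are assumptions for illustration only):

```python
import statistics

def accuracy_and_variability(estimates, true_distance):
    """Return (mean signed error, standard deviation of errors)
    for one condition's egocentric distance estimates."""
    errors = [e - true_distance for e in estimates]
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical estimates (m) of a 4.0 m source -- illustrative, not study data.
auditory_only = [2.5, 3.0, 5.5, 2.0, 6.0]
audio_visual = [3.8, 4.1, 3.9, 4.2, 4.0]

a_bias, a_sd = accuracy_and_variability(auditory_only, 4.0)
av_bias, av_sd = accuracy_and_variability(audio_visual, 4.0)

# The reported A+V advantage corresponds to a smaller bias and smaller spread.
print(abs(av_bias) <= abs(a_bias) and av_sd < a_sd)
```

Lower absolute bias corresponds to the "more accurate" finding and a smaller standard deviation to the "less variable" finding for the A+V condition.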
Full Text Available Background and Aim: Auditory neuropathy (AN) can be diagnosed by an abnormal auditory brainstem response (ABR) in the presence of normal cochlear microphonics (CM) and otoacoustic emissions (OAEs). The aim of this study was to investigate the ABR and other electrodiagnostic test results of 6 patients suspected of AN with problems in speech recognition. Materials and Methods: This cross-sectional study was conducted on 6 AN patients of different ages evaluated by pure tone audiometry, speech discrimination score (SDS), immittance audiometry, electrocochleography, ABR, middle latency response (MLR), late latency response (LLR), and OAEs. Results: Behavioral pure tone audiometric tests showed moderate to profound hearing loss. SDS was disproportionately poor relative to the pure tone thresholds. All patients had normal tympanograms but absent acoustic reflexes. CMs and OAEs were within normal limits. There was no contralateral suppression of OAEs. None of the cases had a normal ABR or MLR, although an LLR was recorded in 4. Conclusion: All patients in this study are typical cases of auditory neuropathy. Despite abnormal input, the LLR remained normal, which indicates differences among auditory evoked potentials in the degree of neural synchrony they require. These findings suggest that the auditory cortex may play a role in regulating the presentation of deficient signals arriving along the auditory pathways from earlier stages.