WorldWideScience

Sample records for auditory temporal processing

  1. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Introduction: During the aging process, all structures of the organism undergo change, which affects the quality of hearing and of comprehension. The hearing loss that occurs as a consequence of this process reduces communicative function and also distances the individual from social relationships. Objective: To compare auditory temporal processing performance between elderly individuals with and without hearing loss. Method: This was a prospective, cross-sectional, diagnostic field study. Twenty-one elders (16 women and 5 men, aged 60 to 81 years) were analyzed, divided into two groups: a "without hearing loss" group (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a "with hearing loss" group (n = 8), with sensorineural hearing loss ranging from mild to moderately severe. Both groups performed the frequency (PPS) and duration (DPS) pattern tests, to evaluate temporal sequencing ability, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution ability. Results: There was no statistically significant difference between the groups on the DPS and RGDT. Temporal sequencing ability was significantly better in the group without hearing loss when evaluated with the PPS in the humming condition, and this difference increased with age group. Conclusion: No difference in auditory temporal processing was found between the groups.

  2. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Several studies have reported age-related decline in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect overall changes in the same mechanisms, or reflect age-related changes in two independent mechanisms. In the current study we examined whether the age-related decline in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82, performed the dichotic temporal order judgment task and the backward digit span task. The findings indicate that age-related decline in auditory temporal resolution and in working memory are two independent processes.

  3. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  4. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study, we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared with unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity changes due to stimulus timing were seen 160-220 ms after somatosensory onset, mostly over the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal ordering of sensory inputs during speech production.

  5. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to the rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all of this work in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps, as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs: an emphasis on synchronous responses for temporal coding exists in the auditory periphery, with more reliance on rate coding as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with onset response patterns.
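
    To make the synchrony-versus-rate distinction in the review above concrete, the following sketch contrasts the standard vector-strength measure of phase locking with a plain firing-rate count on synthetic spike trains. It is an illustration only, not material from the article; the function names and the synthetic spike trains are assumptions made here.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Synchrony (temporal) coding: how tightly spikes lock to one phase of
    the amplitude-modulation cycle (1 = perfect locking, 0 = none)."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

def firing_rate(spike_times, duration):
    """Rate coding: average spikes per second, ignoring spike timing."""
    return len(spike_times) / duration

# Two synthetic 1-s responses to a 50-Hz AM tone with the same number of spikes:
# one phase-locked to the modulation, one with random (Poisson-like) timing.
rng = np.random.default_rng(1)
mod_freq, duration = 50.0, 1.0
cycle_starts = np.arange(0.0, duration, 1.0 / mod_freq)
locked = cycle_starts + 0.004 + 0.001 * rng.standard_normal(cycle_starts.size)
random_timing = np.sort(rng.uniform(0.0, duration, size=cycle_starts.size))

for name, spikes in (("phase-locked", locked), ("random timing", random_timing)):
    print(f"{name:>13}: rate = {firing_rate(spikes, duration):4.0f} sp/s, "
          f"vector strength = {vector_strength(spikes, mod_freq):.2f}")
# Both trains have the same mean rate; only the synchrony measure separates them,
# which is the distinction behind the synchrony-to-rate shift described above.
```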

  6. Spectral and temporal processing in rat posterior auditory cortex.

    Science.gov (United States)

    Pandya, Pritesh K; Rathbun, Daniel L; Moucha, Raluca; Engineer, Navzer D; Kilgard, Michael P

    2008-02-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency-modulated (FM) sweeps, and amplitude-modulated noise and tones with the responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than those of A1 neurons. The broader receptive fields of PAF neurons result in responses to narrowband and broadband inputs that are stronger than those in A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. PAF neurons exhibit latencies that are twice as long as those of A1 neurons. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1 neurons, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for the rate or direction of narrowband, one-octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex.

  7. Intact spectral but abnormal temporal processing of auditory stimuli in autism.

    NARCIS (Netherlands)

    Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. Twenty-three children with high-functioning autism …

  8. Deficit of auditory temporal processing in children with dyslexia-dysgraphia

    Directory of Open Access Journals (Sweden)

    Sima Tajik

    2012-12-01

    Background and Aim: Auditory temporal processing reflects an important aspect of auditory performance; a deficit in this area can interfere with a child's speech, language learning and reading. Temporal resolution, a subcomponent of temporal processing, can be evaluated with the Gaps-in-Noise (GIN) test. Given the relation between auditory temporal processing deficits and the phonological disorder of children with dyslexia-dysgraphia, the aim of this study was to evaluate these children with the GIN test. Methods: The GIN test was performed on 28 normal and 24 dyslexic-dysgraphic children aged 11-12 years. The mean approximate threshold and the percentage of correct answers were compared between the groups. Results: The mean approximate threshold and the percentage of correct answers did not differ significantly between the right and left ears (p>0.05). The mean approximate threshold of the children with dyslexia-dysgraphia (6.97 ms, SD=1.09) was significantly higher (p<0.001) than that of the normal group (5.05 ms, SD=0.92), and their mean percentage of correct answers (58.05%, SD=4.98) was lower than that of the normal group (69.97%, SD=7.16) (p<0.001). Conclusion: Abnormal temporal resolution was found in children with dyslexia-dysgraphia on the basis of the GIN test. Since the brainstem and auditory cortex are responsible for auditory temporal processing, structural and functional differences in these areas between normal and dyslexic-dysgraphic children probably lead to abnormal coding of auditory temporal information and, as a result, to an unavoidable deficit in auditory temporal processing.

  9. Carrier-dependent temporal processing in an auditory interneuron.

    Science.gov (United States)

    Sabourin, Patrick; Gottlieb, Heather; Pollack, Gerald S

    2008-05-01

    Signal processing in the auditory interneuron Omega Neuron 1 (ON1) of the cricket Teleogryllus oceanicus was compared at high- and low-carrier frequencies in three different experimental paradigms. First, integration time, which corresponds to the time it takes for a neuron to reach threshold when stimulated at the minimum effective intensity, was found to be significantly shorter at high-carrier frequency than at low-carrier frequency. Second, phase locking to sinusoidally amplitude modulated signals was more efficient at high frequency, especially at high modulation rates and low modulation depths. Finally, we examined the efficiency with which ON1 detects gaps in a constant tone. As reflected by the decrease in firing rate in the vicinity of the gap, ON1 is better at detecting gaps at low-carrier frequency. Following a gap, firing rate increases beyond the pre-gap level. This "rebound" phenomenon is similar for low- and high-carrier frequencies.

  10. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    Science.gov (United States)

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  11. Auditory Processing Disorders

    Science.gov (United States)

    Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...

  12. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  13. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble of auditory ...

  14. Effects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure.

    Science.gov (United States)

    Moore, Brian C J

    2016-01-01

    Within the cochlea, broadband sounds like speech and music are filtered into a series of narrowband signals, each of which can be considered as a relatively slowly varying envelope (ENV) imposed on a rapidly oscillating carrier (the temporal fine structure, TFS). Information about ENV and TFS is conveyed in the timing and short-term rate of nerve spikes in the auditory nerve. There is evidence that both hearing loss and increasing age adversely affect the ability to use TFS information, but in many studies the effects of hearing loss and age have been confounded. This paper summarises evidence from studies that allow some separation of the effects of hearing loss and age. The results suggest that the monaural processing of TFS information, which is important for the perception of pitch and for segregating speech from background sounds, is adversely affected by both hearing loss and increasing age, the former being more important. The monaural processing of ENV information is hardly affected by hearing loss or by increasing age. The binaural processing of TFS information, which is important for sound localisation and the binaural masking level difference, is also adversely affected by both hearing loss and increasing age, but here the latter seems more important. The deterioration of binaural TFS processing with increasing age appears to start relatively early in life. The binaural processing of ENV information also deteriorates somewhat with increasing age. The reduced binaural processing abilities found for older/hearing-impaired listeners may partially account for the difficulties that such listeners experience in situations where the target speech and interfering sounds come from different directions in space, as is common in everyday life. PMID:27080640
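
    The envelope/fine-structure split described in the first sentence above is conventionally computed with a Hilbert transform. The sketch below illustrates that decomposition on a toy amplitude-modulated tone; the filter settings, function name and test signal are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def env_and_tfs(x, fs, band):
    """Split one narrowband channel into a slowly varying envelope (ENV) and a
    unit-amplitude rapidly oscillating carrier (TFS) via the Hilbert transform."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    narrow = sosfiltfilt(sos, x)            # cochlea-like narrowband channel
    analytic = hilbert(narrow)
    env = np.abs(analytic)                  # ENV: slow amplitude fluctuations
    tfs = np.cos(np.angle(analytic))        # TFS: fast carrier oscillations
    return narrow, env, tfs

# Toy input: a 1-kHz carrier amplitude-modulated at 4 Hz (speech-like rates).
fs = 16000
t = np.arange(0.0, 0.5, 1.0 / fs)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
narrow, env, tfs = env_and_tfs(x, fs, band=(800, 1200))

# ENV times TFS reconstructs the channel signal, showing that the two carry
# complementary information about the same narrowband sound.
print("max reconstruction error:", np.max(np.abs(env * tfs - narrow)))
```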

  15. Auditory processing performance in blind people

    Directory of Open Access Journals (Sweden)

    Ludmilla Vilas Boas

    2011-08-01

    Hearing plays an important role in the development and social adaptation of blind people. OBJECTIVE: To evaluate auditory temporal processing performance in blind people; to characterize temporal resolution ability; to characterize temporal ordering ability; and to compare the performance of the study population across the applied tests. METHODS: Fifteen blind adults participated in this cross-sectional study, which was approved by the Pernambuco Catholic University Ethics Committee (no. 003/2008). Data were collected with both versions of the Random Gap Detection Test (RGDT) and with the duration (DPS) and pitch (PPS) pattern tests. RESULTS: Auditory temporal processing performance was excellent: the average composite threshold was 4.98 ms for the original RGDT version and 50 ms for all frequencies in the expanded version. PPS and DPS scores ranged from 95% to 100%. There were no quantitative differences between the tests, but oral reports suggested that the original RGDT version was more difficult. CONCLUSIONS: The study sample performed well in auditory temporal processing, as well as in temporal resolution and temporal ordering abilities.

  16. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  17. Identified auditory neurons in the cricket Gryllus rubens: temporal processing in calling song sensitive units.

    Science.gov (United States)

    Farris, Hamilton E; Mason, Andrew C; Hoy, Ronald R

    2004-07-01

    This study characterizes aspects of the anatomy and physiology of auditory receptors and certain interneurons in the cricket Gryllus rubens. We identified an 'L'-shaped ascending interneuron tuned to frequencies > 15 kHz (57 dB SPL threshold at 20 kHz). Also identified were two intrasegmental 'omega'-shaped interneurons that were broadly tuned to 3-65 kHz, with best sensitivity to frequencies of the male calling song (5 kHz, 52 dB SPL). The temporal sensitivity of units excited by calling song frequencies was measured using sinusoidally amplitude-modulated stimuli that varied in both modulation rate and depth, parameters that vary with song propagation distance and the number of singing males. Omega cells responded like low-pass filters with a time constant of 42 ms. In contrast, receptors significantly coded modulation rates up to the maximum rate presented (85 Hz). Whereas omegas required approximately 65% modulation depth at 45 Hz (the calling song AM rate) to elicit significant synchrony coding, receptors tolerated an approximately 50% reduction in modulation depth up to 85 Hz. These results suggest that omega cells in G. rubens might not play a role in detecting song modulation per se at increased distances from a singing male.

  18. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one's attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation […] of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent […] on the stream segregation process was analysed. The model analysis showed that auditory frequency selectivity and physiological forward masking play a significant role in stream segregation based on frequency separation and tone rate. Secondly, the model analysis suggested that neural adaptation...
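
    The abstract describes a model in which channels that fluctuate together cohere into one stream. As a rough illustration of that idea (not the thesis's actual model, which uses a more detailed auditory front end), the sketch below correlates the envelopes of two band-pass channels over short windows; the filter-bank settings, window length and function names are assumptions made here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def channel_envelopes(signal, fs, center_freqs, bw=0.3):
    """Crude band-pass filter bank plus Hilbert envelope per channel
    (a stand-in for a proper auditory-periphery front end)."""
    envs = []
    for fc in center_freqs:
        sos = butter(4, [fc * (1 - bw), fc * (1 + bw)], btype="band", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfiltfilt(sos, signal))))
    return np.array(envs)                        # (n_channels, n_samples)

def coherence_matrix(envs, fs, win_s=0.2):
    """Windowed zero-lag correlation between channel envelopes: channels whose
    envelopes rise and fall together are coherent (grouped into one stream),
    while temporally incoherent channels segregate into separate streams."""
    win = int(win_s * fs)
    n_ch, n_s = envs.shape
    n_win = n_s // win
    segs = envs[:, :n_win * win].reshape(n_ch, n_win, win)
    segs = segs - segs.mean(axis=2, keepdims=True)
    segs = segs / (np.linalg.norm(segs, axis=2, keepdims=True) + 1e-12)
    corr = np.einsum("iwt,jwt->ijw", segs, segs)  # per-window channel correlations
    return corr.mean(axis=2)

# Example: an alternating A-B-A-B tone sequence. The two tones never overlap in
# time, so their channel envelopes are incoherent and the model would put them
# in separate streams.
fs = 16000
t = np.arange(0.0, 2.0, 1.0 / fs)
gate_a = (np.floor(t / 0.125) % 2 == 0).astype(float)   # 125-ms A tones
gate_b = 1.0 - gate_a                                    # interleaved B tones
mix = gate_a * np.sin(2 * np.pi * 500 * t) + gate_b * np.sin(2 * np.pi * 2000 * t)
envs = channel_envelopes(mix, fs, center_freqs=[500, 2000])
print(coherence_matrix(envs, fs))    # off-diagonal near -1: incoherent channels
```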

  1. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2012-05-01

    Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  2. Species specificity of temporal processing in the auditory midbrain of gray treefrogs: long-interval neurons.

    Science.gov (United States)

    Hanson, Jessica L; Rose, Gary J; Leary, Christopher J; Graham, Jalina A; Alluri, Rishi K; Vasquez-Opazo, Gustavo A

    2016-01-01

    In recently diverged gray treefrogs (Hyla chrysoscelis and H. versicolor), advertisement calls that differ primarily in pulse shape and pulse rate act as an important premating isolation mechanism. Temporally selective neurons in the anuran inferior colliculus may contribute to selective behavioral responses to these calls. Here we present in vivo extracellular and whole-cell recordings from long-interval-selective neurons (LINs) made during presentation of pulses that varied in shape and rate. Whole-cell recordings revealed that interplay between excitation and inhibition shapes long-interval selectivity. LINs in H. versicolor showed greater selectivity for slow-rise pulses, consistent with the slow-rise pulse characteristics of their calls. The steepness of pulse-rate tuning functions, but not the distributions of best pulse rates, differed between the species in a manner that depended on whether pulses had slow or fast-rise shape. When tested with stimuli representing the temporal structure of the advertisement calls of H. chrysoscelis or H. versicolor, approximately 27 % of LINs in H. versicolor responded exclusively to the latter stimulus type. The LINs of H. chrysoscelis were less selective. Encounter calls, which are produced at similar pulse rates in both species (≈5 pulses/s), are likely to be effective stimuli for the LINs of both species. PMID:26614093

  3. Calcium-dependent control of temporal processing in an auditory interneuron: a computational analysis.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2010-09-01

    Sensitivity to acoustic amplitude modulation in crickets differs between species and depends on carrier frequency (e.g., calling song vs. bat-ultrasound bands). Using computational tools, we explore how Ca²⁺-dependent mechanisms underlying selective attention can contribute to such differences in amplitude modulation sensitivity. For omega neuron 1 (ON1), selective attention is mediated by Ca²⁺-dependent feedback: internal [Ca²⁺] increases with excitation, activating a Ca²⁺-dependent after-hyperpolarizing current. We propose that the Ca²⁺ removal rate and the size of the after-hyperpolarizing current can determine ON1's temporal modulation transfer function (TMTF). This is tested using a conductance-based simulation calibrated to responses in vivo. The model shows that parameter values that simulate responses to single pulses are sufficient to simulate responses to modulated stimuli: no special modulation-sensitive mechanisms are necessary, as the high- and low-pass portions of the TMTF are due to Ca²⁺-dependent spike frequency adaptation and post-synaptic potential depression, respectively. Furthermore, variance in the two biophysical parameters is sufficient to produce TMTFs of varying bandwidth, shifting amplitude modulation sensitivity like that seen in different species and in response to different carrier frequencies. Thus, the hypothesis that the size of the after-hyperpolarizing current and the rate of Ca²⁺ removal can affect amplitude modulation sensitivity is computationally validated.
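
    The hypothesis above, that the Ca²⁺ removal rate and the size of the after-hyperpolarizing current set the shape of the TMTF, can be caricatured with a much simpler model than the calibrated conductance-based simulation used in the study. The sketch below uses a leaky integrate-and-fire cell with a Ca²⁺-gated adaptation conductance; all parameter values and names are arbitrary assumptions, chosen only to show the qualitative effect of the Ca²⁺ decay time constant.

```python
import numpy as np

def simulate_on1_toy(stim, dt=0.1, tau_ca=80.0, g_ahp=0.4):
    """Leaky integrate-and-fire cell with a Ca2+-gated after-hyperpolarizing (AHP)
    conductance: a toy stand-in for the calibrated ON1 model in the abstract.
    tau_ca (ms) sets the Ca2+ removal rate; g_ahp scales the AHP current."""
    v_rest, v_thresh, v_reset, e_k = -65.0, -50.0, -65.0, -80.0
    tau_m, r_m = 10.0, 1.0                       # membrane time constant, resistance
    v, ca, spike_times = v_rest, 0.0, []
    for i, drive in enumerate(stim):
        i_ahp = g_ahp * ca * (e_k - v)           # AHP current grows with [Ca2+]
        v += dt * ((-(v - v_rest) + r_m * drive + i_ahp) / tau_m)
        ca += dt * (-ca / tau_ca)                # first-order Ca2+ removal
        if v >= v_thresh:                        # spike: reset and load Ca2+
            spike_times.append(i * dt)
            v = v_reset
            ca += 1.0
    return np.array(spike_times)

# Sinusoidally amplitude-modulated drive at a slow and a fast modulation rate.
dt, dur_ms = 0.1, 1000.0
t = np.arange(0.0, dur_ms, dt)
for am_rate_hz in (10.0, 80.0):
    stim = 20.0 * 0.5 * (1.0 + np.sin(2 * np.pi * am_rate_hz * t / 1000.0))
    for tau_ca in (40.0, 160.0):
        n_spikes = simulate_on1_toy(stim, dt=dt, tau_ca=tau_ca).size
        # Slower Ca2+ removal (larger tau_ca) lets adaptation build up more,
        # lowering the response and shifting the cell's effective TMTF.
        print(f"AM {am_rate_hz:>4.0f} Hz, tau_ca {tau_ca:>5.0f} ms: {n_spikes} spikes")
```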

  4. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    For patients with pharmaco-resistant temporal lobe epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e., the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal lobe epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, that evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented similar deficits in pitch retention and in identification and short-term memorisation of environmental sounds, while not being impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  5. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to lie in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequency bands (low gamma and theta), whereas stimuli with non-matching temporal structure do not. Furthermore, the topography of theta-band phase tracking shows rightward lateralization, while gamma-band phase tracking occurs bilaterally. The results support the hypothesis that multi-time resolution processing exists in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.

  6. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    […] kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) ...

  7. Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Science.gov (United States)

    Muñoz-López, M; Insausti, R; Mohedano-Moriano, A; Mishkin, M; Saunders, R C

    2015-01-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may differ from those in vision. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys. PMID:26041980

  8. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... A positive, realistic attitude and healthy self-esteem in a child with APD can work wonders. And kids with APD can go on to ...

  9. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
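
    The prediction index mentioned above was derived from cross-correlations between tap timing and pacing events. A plausible toy reconstruction is sketched below: it compares the correlation of inter-tap intervals with the concurrent pacing intervals (lag 0, prediction) against the correlation with the preceding intervals (lag 1, tracking). The ratio, the function names and the simulated tappers are assumptions made here, not the authors' exact computation.

```python
import numpy as np

def corr_at_lag(a, b, lag):
    """Pearson correlation between a[t] and b[t + lag]."""
    if lag > 0:
        a, b = a[:-lag], b[lag:]
    elif lag < 0:
        a, b = a[-lag:], b[:lag]
    return np.corrcoef(a, b)[0, 1]

def prediction_index(tap_times, pacing_times):
    """Hypothetical prediction/tracking index: how strongly inter-tap intervals
    follow the concurrent pacing intervals (lag 0, prediction) relative to the
    preceding ones (lag 1, tracking)."""
    iti = np.diff(tap_times)                 # inter-tap intervals
    ioi = np.diff(pacing_times)              # pacing inter-onset intervals
    n = min(iti.size, ioi.size)
    iti, ioi = iti[:n], ioi[:n]
    return corr_at_lag(ioi, iti, 0) / corr_at_lag(ioi, iti, 1)

# Pacing sequence whose tempo speeds up and slows down sinusoidally.
rng = np.random.default_rng(0)
k = np.arange(60)
ioi = 0.525 + 0.075 * np.sin(2 * np.pi * k / 12)          # seconds
pacing = np.concatenate(([0.0], np.cumsum(ioi)))

# A "predictor" reproduces the concurrent interval, a "tracker" the previous one.
predictor = np.concatenate(([0.0], np.cumsum(ioi + 0.01 * rng.standard_normal(60))))
tracker = np.concatenate(([0.0], np.cumsum(np.r_[ioi[0], ioi[:-1]] + 0.01 * rng.standard_normal(60))))

print("predictor index:", round(prediction_index(predictor, pacing), 2))   # > 1
print("tracker index:  ", round(prediction_index(tracker, pacing), 2))     # < 1
```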

  10. Left temporal lobe structural and functional abnormality underlying auditory hallucinations

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2009-05-01

    In this article, we review recent findings from our laboratory indicating that auditory hallucinations in schizophrenia are internally generated speech mis-representations lateralized to the left superior temporal gyrus and sulcus. Such experiences are, moreover, not cognitively suppressed, owing to enhanced attention to the voices and failure of fronto-parietal executive control functions. An overview of diagnostic questionnaires for scoring of symptoms is presented, together with a review of behavioural, structural and functional MRI data. Functional imaging data have shown either increased or decreased activation depending on whether or not patients were presented with an external stimulus during scanning. Structural imaging data have shown reduction of grey matter density and volume in the same temporal lobe areas. The behavioural and neuroimaging findings are moreover hypothesized to be related to glutamate hypofunction in schizophrenia. We propose a model for the understanding of auditory hallucinations that traces their origin to uncontrolled neuronal firing in the speech areas of the left temporal lobe, which is not suppressed by volitional cognitive control processes due to dysfunctional fronto-parietal executive cortical networks.

  11. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG-to-PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  12. Auditory temporal resolution and integration - stages of analyzing time-varying sounds

    DEFF Research Database (Denmark)

    Pedersen, Benjamin

    2007-01-01

    […] much is still unknown about how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, and the topic is approached via four different listening experiments designed to probe several aspects of temporal processing […] scheme: effects such as attention seem to play an important role in loudness integration, and further, it will be demonstrated that the auditory system can rely on temporal cues at a much finer level of detail than predicted by existing models (temporal details in the time range of 60 µs can...

  13. Auditory processing models

    DEFF Research Database (Denmark)

    Dau, Torsten

    2008-01-01

    The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of Acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each Section of the Handbook will pr...

  14. Functional MRI observation of the auditory Chinese lexical processing mechanism in the left anterior temporal lobe

    Institute of Scientific and Technical Information of China (English)

    王晓怡; 卢洁; 李坤成; 张苗; 徐国庆; 舒华

    2011-01-01

    Objective: To explore the neural mechanism of auditory Chinese lexical processing in the left anterior temporal lobe (ATL) of healthy participants using functional magnetic resonance imaging (fMRI). Methods: Fifteen right-handed healthy participants (5 males and 10 females) were scanned with a 3.0T MR system and a standard head coil while they either repeated auditorily presented words or judged whether the heard items referred to something dangerous (a semantic judgment task). AFNI was used to process the fMRI data and to localize functional areas and their differences in the anterior temporal lobe. Results: Phonological processing of auditory Chinese lexical information was localized to the anterior superior temporal gyrus, and semantic processing to the anterior middle temporal gyrus; phonological and semantic processing of auditory Chinese words were thus segregated. Conclusion: The ATL supports semantic integration. Two pathways to semantic access were identified: a direct pathway in the dorsal temporal lobe for the repetition task and an indirect pathway in the ventral temporal lobe for the semantic judgment task.

  15. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties. PMID:24264076

  16. Multimodal Lexical Processing in Auditory Cortex Is Literacy Skill Dependent

    OpenAIRE

    McNorgan, Chris; Awati, Neha; Desroches, Amy S.; Booth, James R.

    2013-01-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others d...

  17. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  18. Effects of different types of auditory temporal training on language skills: a systematic review

    Directory of Open Access Journals (Sweden)

    Cristina Ferraz Borges Murphy

    2013-10-01

    Previous studies have investigated the effects of auditory temporal training on language disorders. Recently, the effects of new approaches, such as musical training and the use of software, have also been considered. To investigate the effects of different auditory temporal training approaches on language skills, we reviewed the available literature on musical training, the use of software, and formal auditory training by searching the SciELO, MEDLINE, LILACS-BIREME and EMBASE databases. Study design: Systematic review. Results: Using evidence levels I and II as the criteria, 29 of the 523 papers found were deemed relevant to one of the topics (use of software: 13 papers; formal auditory training: six papers; musical training: 10 papers). Of the three approaches, studies that investigated the use of software and musical training had the highest levels of evidence; however, these studies also raised concerns about the hypothesized relationship between auditory temporal processing and language. Future studies are necessary to investigate the actual contribution of these three types of auditory temporal training to language skills.

  19. Spatial auditory processing in pinnipeds

    Science.gov (United States)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that used psychophysical approaches in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities in azimuth of three pinniped species (the harbor seal, Phoca vitulina; the California sea lion, Zalophus californianus; and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls, produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and conducting an acoustic playback study.

  20. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    Science.gov (United States)

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.

  1. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study.

    Science.gov (United States)

    Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta

    2014-03-15

    Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study aimed at improving auditory comprehension in aphasic patients following specific training in the perception of the temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed with the Token Test, phonemic awareness tasks and the Voice-Onset-Time Test. TO perception was assessed using the auditory temporal-order threshold, defined as the shortest interval between two consecutive stimuli necessary to report their before-after relation correctly. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11), aimed at improving sequencing abilities, or control non-temporal training (NT, n=7), focussed on volume discrimination. The TT yielded improved TO perception; moreover, a transfer of improvement was observed from the time domain to the language domain, which was untrained. The NT improved neither TO perception nor comprehension on any language test. These results are in agreement with previous studies, which demonstrated improved language competency following TT in language-learning-impaired or dyslexic children; our results indicate such benefits for the first time in aphasic patients. PMID:24388435

  2. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  3. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance, acetylcholinesterase...

  4. Temporal pattern recognition based on instantaneous spike rate coding in a simple auditory system.

    Science.gov (United States)

    Nabatiyan, A; Poulet, J F A; de Polavieja, G G; Hedwig, B

    2003-10-01

    Auditory pattern recognition by the CNS is a fundamental process in acoustic communication. Because crickets communicate with stereotyped patterns of constant-frequency syllables, they are established models for investigating the neuronal mechanisms of auditory pattern recognition. By comparing both coding parameters in a thoracic interneuron (Omega neuron ON1) of the cricket (Gryllus bimaculatus) auditory system, we provide evidence that, for the neural processing of amplitude-modulated sounds, the instantaneous spike rate rather than the time-averaged neural activity is the appropriate coding principle. When the neuron is stimulated with different temporal sound patterns, analysis of the instantaneous spike rate demonstrates that it acts as a low-pass filter for syllable patterns. The instantaneous spike rate is low at high syllable rates, but prominent peaks in the instantaneous spike rate are generated as the syllable rate resembles that of the species-specific pattern. The occurrence and repetition rate of these peaks in the neuronal discharge are sufficient to explain temporal filtering in the cricket auditory pathway, as they closely match the tuning of phonotactic behavior to different sound patterns. Thus temporal filtering or "pattern recognition" occurs at an early stage in the auditory pathway.
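
    To make the contrast between the two coding parameters concrete, the following sketch computes a time-averaged firing rate and an instantaneous rate (the reciprocal of each interspike interval) from a list of spike times. The spike times and stimulus duration are fabricated for illustration and are not data from the cricket recordings.

```python
import numpy as np

def time_averaged_rate(spike_times_s, duration_s):
    """Mean firing rate over the whole stimulus, in spikes per second."""
    return len(spike_times_s) / duration_s

def instantaneous_rate(spike_times_s):
    """Instantaneous rate for each interspike interval: the reciprocal of the ISI."""
    return 1.0 / np.diff(spike_times_s)

# Fabricated spike train: a burst locked to one syllable, then sparse activity.
spikes = np.array([0.010, 0.015, 0.020, 0.025, 0.030, 0.250, 0.480])  # seconds
print("time-averaged rate:", time_averaged_rate(spikes, duration_s=0.5), "spikes/s")
print("instantaneous rates:", np.round(instantaneous_rate(spikes), 1), "spikes/s")
# The burst produces instantaneous peaks near 200 spikes/s that the
# 14 spikes/s average completely hides.
```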

  5. Auditory Processing Disorder in Children

    Science.gov (United States)

  6. Rapid auditory learning of temporal gap detection.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2016-07-01

    The rapid initial phase of training-induced improvement has been shown to reflect a genuine sensory change in perception. Several features of early and rapid learning, such as generalization and stability, remain to be characterized. The present study demonstrated that learning effects from brief training on a temporal gap detection task, using spectrally similar narrowband noise markers to define the gap (within-channel task), transfer across ears but not across spectrally dissimilar markers (between-channel task). The learning effects associated with brief training on gap detection were found to be stable for at least a day. These initial findings have significant implications for characterizing early and rapid learning effects. PMID:27475211

  7. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole. PMID:20881120

  8. Temporal coding by populations of auditory receptor neurons.

    Science.gov (United States)

    Sabourin, Patrick; Pollack, Gerald S

    2010-03-01

    Auditory receptor neurons of crickets are most sensitive to either low or high sound frequencies. Earlier work showed that the temporal coding properties of first-order auditory interneurons are matched to the temporal characteristics of natural low- and high-frequency stimuli (cricket songs and bat echolocation calls, respectively). We studied the temporal coding properties of receptor neurons and used modeling to investigate how activity within populations of low- and high-frequency receptors might contribute to the coding properties of interneurons. We confirm earlier findings that individual low-frequency-tuned receptors code stimulus temporal pattern poorly, but show that coding performance of a receptor population increases markedly with population size, due in part to low redundancy among the spike trains of different receptors. By contrast, individual high-frequency-tuned receptors code a stimulus temporal pattern fairly well and, because their spike trains are redundant, there is only a slight increase in coding performance with population size. The coding properties of low- and high-frequency receptor populations resemble those of interneurons in response to low- and high-frequency stimuli, suggesting that coding at the interneuron level is partly determined by the nature and organization of afferent input. Consistent with this, the sound-frequency-specific coding properties of an interneuron, previously demonstrated by analyzing its spike train, are also apparent in the subthreshold fluctuations in membrane potential that are generated by synaptic input from receptor neurons.

  9. How modality specific is processing of auditory and visual rhythms?

    Science.gov (United States)

    Pasinski, Amanda C; McAuley, J Devin; Snyder, Joel S

    2016-02-01

    The present study used ERPs to test the extent to which temporal processing is modality specific or modality general. Participants were presented with auditory and visual temporal patterns that consisted of initial two- or three-event beginning patterns. This delineated a constant standard time interval, followed by a two-event ending pattern delineating a variable test interval. Participants judged whether they perceived the pattern as a whole to be speeding up or slowing down. The contingent negative variation (CNV), a negative potential reflecting temporal expectancy, showed a larger amplitude for the auditory modality compared to the visual modality but a high degree of similarity in scalp voltage patterns across modalities, suggesting that the CNV arises from modality-general processes. A late, memory-dependent positive component (P3) also showed similar patterns across modalities.

  10. Auditory processing efficiency deficits in children with developmental language impairments

    Science.gov (United States)

    Hartley, Douglas E. H.; Moore, David R.

    2002-12-01

    The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, both within the auditory and visual domains. This hypothesis has been supported through evidence that language-impaired individuals have excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. Results from this modeling suggest that the masking results can be better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window, but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in the simultaneous masking task, and the much larger deficits in backward and forward masking tasks amongst those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
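
    The core of the "auditory efficiency" argument can be reduced to a small numerical illustration: if a listener needs the signal to stand a few decibels higher above the masker at the output of a compressive periphery, that requirement translates into a much larger input-level deficit when the signal is compressed on its own (non-simultaneous masking) than when signal and masker are compressed together (simultaneous masking). The sketch below uses an assumed compression exponent and an assumed efficiency loss; it is a simplified rendering of the reasoning, not the temporal-window model actually fitted in the paper.

```python
# Toy rendering of the "auditory efficiency" argument; every number is assumed.
COMPRESSION = 0.25   # assumed basilar-membrane compression slope (dB out per dB in)
DELTA_DB = 3.0       # assumed efficiency loss: extra output signal-to-masker ratio needed

# Simultaneous masking: signal and masker are compressed together, so the
# output signal-to-masker ratio tracks the input ratio roughly one-to-one.
simultaneous_deficit_db = DELTA_DB * 1.0

# Backward/forward masking: the signal is compressed on its own, so raising its
# output level by DELTA_DB requires DELTA_DB / COMPRESSION more level at the input.
nonsimultaneous_deficit_db = DELTA_DB / COMPRESSION

print(f"predicted simultaneous-masking deficit:     {simultaneous_deficit_db:.1f} dB")
print(f"predicted non-simultaneous-masking deficit: {nonsimultaneous_deficit_db:.1f} dB")
```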

  11. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus;

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  12. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. PMID:27255816
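
    The reported decline rate invites a quick back-of-the-envelope projection. The sketch below simply extrapolates the stated minimum rate of 1.05 ms per decade from an assumed young-adult baseline; the baseline threshold and reference age are assumptions, not values from the paper.

```python
def projected_tgdt_ms(age_years, baseline_ms=7.0, baseline_age=20.0,
                      rate_ms_per_decade=1.05):
    """Linear extrapolation of the gap detection threshold with age (illustrative)."""
    decades = max(0.0, (age_years - baseline_age) / 10.0)
    return baseline_ms + rate_ms_per_decade * decades

for age in (20, 50, 80):
    print(f"age {age}: projected TGDT ~ {projected_tgdt_ms(age):.1f} ms")
```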

  14. The effect of exogenous spatial attention on auditory information processing.

    OpenAIRE

    Kanai, Kenichi; Ikeda, Kazuo; Tayama, Tadayuki

    2007-01-01

    This study investigated the effect of exogenous spatial attention on auditory information processing. In Experiments 1, 2 and 3, temporal order judgment tasks were performed to examine the effect. In Experiments 1 and 2, a cue tone was presented to either the left or right ear, followed by sequential presentation of two target tones. The subjects judged the order of presentation of the target tones. The results showed that subjects heard both tones simultaneously when the target tone, which wa...

  15. Temporal Lobe Epilepsy Alters Auditory-motor Integration For Voice Control

    Science.gov (United States)

    Li, Weifeng; Chen, Ziyi; Yan, Nan; Jones, Jeffery A.; Guo, Zhiqiang; Huang, Xiyan; Chen, Shaozhen; Liu, Peng; Liu, Hanjun

    2016-01-01

    Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls. PMID:27356768

  16. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  17. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether...... a group of dyslexics had word level receptive difficulties using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological age controls in an auditory lexical decision task, with disproportionately low performance on nonsense words...

  18. Heritability of non-speech auditory processing skills.

    Science.gov (United States)

    Brewer, Carmen C; Zalewski, Christopher K; King, Kelly A; Zobay, Oliver; Riley, Alison; Ferguson, Melanie A; Bird, Jonathan E; McCabe, Margaret M; Hood, Linda J; Drayna, Dennis; Griffith, Andrew J; Morell, Robert J; Friedman, Thomas B; Moore, David R

    2016-08-01

    Recent insights into the genetic bases for autism spectrum disorder, dyslexia, stuttering, and language disorders suggest that neurogenetic approaches may also reveal at least one etiology of auditory processing disorder (APD). A person with an APD typically has difficulty understanding speech in background noise despite having normal pure-tone hearing sensitivity. The estimated prevalence of APD may be as high as 10% in the pediatric population, yet the causes are unknown and have not been explored by molecular or genetic approaches. The aim of our study was to determine the heritability of frequency and temporal resolution for auditory signals and speech recognition in noise in 96 identical or fraternal twin pairs, aged 6-11 years. Measures of auditory processing (AP) of non-speech sounds included backward masking (temporal resolution), notched noise masking (spectral resolution), pure-tone frequency discrimination (temporal fine structure sensitivity), and nonsense syllable recognition in noise. We provide evidence of significant heritability, ranging from 0.32 to 0.74, for individual measures of these non-speech-based AP skills that are crucial for understanding spoken language. Identification of specific heritable AP traits such as these serves as a basis to pursue the genetic underpinnings of APD by identifying genetic variants associated with common AP disorders in children and adults. PMID:26883091
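
    For readers unfamiliar with twin designs, heritability is often summarized with Falconer's formula, h^2 = 2(r_MZ - r_DZ), where r_MZ and r_DZ are the co-twin correlations for identical and fraternal pairs. The sketch below applies the formula to assumed correlations chosen only to land inside the reported 0.32-0.74 range; the study's own estimates come from its reported analyses, not from this calculation.

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability from co-twin correlations."""
    return 2.0 * (r_mz - r_dz)

# Assumed co-twin correlations for a backward-masking threshold (illustration only).
r_mz, r_dz = 0.72, 0.45
print(f"h^2 = 2 * ({r_mz} - {r_dz}) = {falconer_h2(r_mz, r_dz):.2f}")
```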

  19. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997......)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass...... modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity...
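
    The listed stages can be pictured as a simple feed-forward chain. The sketch below wires up heavily simplified stand-ins for a few of them (a Butterworth band-pass for one basilar-membrane channel, half-wave rectification plus low-pass filtering for hair-cell transduction, a single modulation filter, and additive internal noise); all filter orders, cutoffs and the sampling rate are assumptions, and the sketch does not reproduce the published model's parameters or behavior.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000  # sampling rate in Hz (assumed)

def basilar_membrane_channel(x, low_hz=800.0, high_hz=1200.0):
    """Stand-in for one peripheral frequency channel: a simple band-pass filter."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=FS, output="sos")
    return sosfilt(sos, x)

def haircell_envelope(x, cutoff_hz=1000.0):
    """Hair-cell transduction stand-in: half-wave rectification then low-pass filtering."""
    sos = butter(2, cutoff_hz, btype="lowpass", fs=FS, output="sos")
    return sosfilt(sos, np.maximum(x, 0.0))

def modulation_filter(envelope, low_hz=2.0, high_hz=8.0):
    """One band of a modulation filterbank applied to the envelope."""
    sos = butter(2, [low_hz, high_hz], btype="bandpass", fs=FS, output="sos")
    return sosfilt(sos, envelope)

def internal_representation(x, noise_std=1e-3, rng=np.random.default_rng(0)):
    """Chain the stages and add constant-variance internal noise before detection."""
    out = modulation_filter(haircell_envelope(basilar_membrane_channel(x)))
    return out + rng.normal(0.0, noise_std, size=out.shape)

# Test input: a 1-kHz tone, 50% amplitude modulated at 4 Hz.
t = np.arange(0, 1.0, 1.0 / FS)
stimulus = (1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * t)) * np.sin(2 * np.pi * 1000.0 * t)
representation = internal_representation(stimulus)
print("internal representation RMS:", float(np.sqrt(np.mean(representation ** 2))))
```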

  20. Visual–auditory spatial processing in auditory cortical neurons

    OpenAIRE

    Bizley, Jennifer K.; King, Andrew J

    2008-01-01

    Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...

  1. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural...... their audibility when embedded in similar background interferers, a phenomenon referred to as comodulation masking release (CMR). Knowledge of the auditory processing of amplitude modulations provides therefore crucial information for a better understanding of how the auditory system analyses acoustic scenes...... chapter introduces a processing stage, in which information from different peripheral frequency channels is combined. This so-called across-channel processing is assumed to take place at the output of a modulation filterbank, and is crucial in order to account for CMR conditions where the frequency...

  2. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    OBJECTIVE: To analyze auditory processing test results in children who had otitis media during their first five years of life, taking age into account, and to classify the central auditory processing test findings according to the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 public school students with no history of otitis media. Experimental group I consisted of 39 public school students and experimental group II of 30 private school students; children in both groups had had secretory otitis media during their first five years of life and had undergone surgery for placement of bilateral ventilation tubes. All participants underwent a complete audiological evaluation and auditory processing tests. RESULTS: The left ear showed significantly worse performance than the right ear on the dichotic digits test and the pitch pattern sequence test. Students in the experimental groups performed worse than the control group on the dichotic digits and gaps-in-noise tests. Children in experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests than experimental group II. The hearing skills found to be altered were temporal resolution and figure-ground perception. CONCLUSION: Children who had secretory otitis media during their first five years of life and underwent surgery for placement of bilateral ventilation tubes showed worse performance on auditory abilities, and children from public schools had worse auditory processing test results than students from private schools.

  3. Spectral features control temporal plasticity in auditory cortex.

    Science.gov (United States)

    Kilgard, M P; Pandya, P K; Vazquez, J L; Rathbun, D L; Engineer, N D; Moucha, R

    2001-01-01

    Cortical responses are adjusted and optimized throughout life to meet changing behavioral demands and to compensate for peripheral damage. The cholinergic nucleus basalis (NB) gates cortical plasticity and focuses learning on behaviorally meaningful stimuli. By systematically varying the acoustic parameters of the sound paired with NB activation, we have previously shown that tone frequency and amplitude modulation rate alter the topography and selectivity of frequency tuning in primary auditory cortex. This result suggests that network-level rules operate in the cortex to guide reorganization based on specific features of the sensory input associated with NB activity. This report summarizes recent evidence that temporal response properties of cortical neurons are influenced by the spectral characteristics of sounds associated with cholinergic modulation. For example, repeated pairing of a spectrally complex (ripple) stimulus decreased the minimum response latency for the ripple, but lengthened the minimum latency for tones. Pairing a rapid train of tones with NB activation only increased the maximum following rate of cortical neurons when the carrier frequency of each train was randomly varied. These results suggest that spectral and temporal parameters of acoustic experiences interact to shape spectrotemporal selectivity in the cortex. Additional experiments with more complex stimuli are needed to clarify how the cortex learns natural sounds such as speech.

  4. Auditory priming of frequency and temporal information: Effects of lateralized presentation

    OpenAIRE

    List, Alexandra; Justus, Timothy

    2007-01-01

    Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralized. In vision, lateralized local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli wer...

  5. AUDITORY CORTICAL PLASTICITY: DOES IT PROVIDE EVIDENCE FOR COGNITIVE PROCESSING IN THE AUDITORY CORTEX?

    OpenAIRE

    Irvine, Dexter R. F.

    2007-01-01

    The past 20 years have seen substantial changes in our view of the nature of the processing carried out in auditory cortex. Some processing of a cognitive nature, previously attributed to higher order “association” areas, is now considered to take place in auditory cortex itself. One argument adduced in support of this view is the evidence indicating a remarkable degree of plasticity in the auditory cortex of adult animals. Such plasticity has been demonstrated in a wide range of paradigms, i...

  6. Neural interactions in unilateral colliculus and between bilateral colliculi modulate auditory signal processing

    Science.gov (United States)

    Mei, Hui-Xian; Cheng, Liang; Chen, Qi-Cai

    2013-01-01

    In the auditory pathway, the inferior colliculus (IC) is a major center for temporal and spectral integration of auditory information. There are widespread neural interactions within one IC and between the two ICs that can modulate auditory signal processing, such as the amplitude and frequency selectivity of IC neurons. These neural interactions are either inhibitory or excitatory, mediated mostly by γ-aminobutyric acid (GABA) and glutamate, respectively, with inhibitory interactions in the majority. This imbalance between excitatory and inhibitory projections plays an important role in the formation of unilateral auditory dominance and sound localization, and the interactions within one IC and between the two ICs provide an adjustable and plastic modulation pattern for auditory signal processing. PMID:23626523

  8. Repetition suppression in auditory-motor regions to pitch and temporal structure in music.

    Science.gov (United States)

    Brown, Rachel M; Chen, Joyce L; Hollinger, Avrum; Penhune, Virginia B; Palmer, Caroline; Zatorre, Robert J

    2013-02-01

    Music performance requires control of two sequential structures: the ordering of pitches and the temporal intervals between successive pitches. Whether pitch and temporal structures are processed as separate or integrated features remains unclear. A repetition suppression paradigm compared neural and behavioral correlates of mapping pitch sequences and temporal sequences to motor movements in music performance. Fourteen pianists listened to and performed novel melodies on an MR-compatible piano keyboard during fMRI scanning. The pitch or temporal patterns in the melodies either changed or repeated (remained the same) across consecutive trials. We expected decreased neural response to the patterns (pitch or temporal) that repeated across trials relative to patterns that changed. Pitch and temporal accuracy were high, and pitch accuracy improved when either pitch or temporal sequences repeated over trials. Repetition of either pitch or temporal sequences was associated with linear BOLD decrease in frontal-parietal brain regions including dorsal and ventral premotor cortex, pre-SMA, and superior parietal cortex. Pitch sequence repetition (in contrast to temporal sequence repetition) was associated with linear BOLD decrease in the intraparietal sulcus (IPS) while pianists listened to melodies they were about to perform. Decreased BOLD response in IPS also predicted increase in pitch accuracy only when pitch sequences repeated. Thus, behavioral performance and neural response in sensorimotor mapping networks were sensitive to both pitch and temporal structure, suggesting that pitch and temporal structure are largely integrated in auditory-motor transformations. IPS may be involved in transforming pitch sequences into spatial coordinates for accurate piano performance.

  9. Neural Representations of Complex Temporal Modulations in the Human Auditory Cortex

    OpenAIRE

    Ding, Nai; Simon, Jonathan Z.

    2009-01-01

    Natural sounds such as speech contain multiple levels and multiple types of temporal modulations. Because of nonlinearities of the auditory system, however, the neural response to multiple, simultaneous temporal modulations cannot be predicted from the neural responses to single modulations. Here we show the cortical neural representation of an auditory stimulus simultaneously frequency modulated (FM) at a high rate, f_FM ≈ 40 Hz, and amplitude modulated (AM) at a slow rate, f_AM

  10. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale and the intraparietal sulcus, demonstrating that this area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
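
    The stochastic figure-ground (SFG) construction described above lends itself to a direct sketch: a sequence of brief chords, each built from randomly drawn pure-tone components, with a small "figure" subset of frequencies repeated across consecutive chords. The chord duration, component counts and frequency grid below are assumed values for illustration, not the authors' stimulus parameters.

```python
import numpy as np

FS = 22050              # sampling rate in Hz (assumed)
CHORD_S = 0.05          # 50-ms chords (assumed)
FREQ_POOL = np.geomspace(200.0, 7200.0, 120)   # candidate pure-tone frequencies (assumed)

def chord(freqs, dur_s=CHORD_S, fs=FS):
    """One brief chord: the sum of equal-amplitude pure tones at the given frequencies."""
    t = np.arange(int(dur_s * fs)) / fs
    return sum(np.sin(2.0 * np.pi * f * t) for f in freqs) / len(freqs)

def sfg_stimulus(n_chords=20, ground_per_chord=10, figure_size=4, figure_onset=8,
                 rng=np.random.default_rng(0)):
    """Random 'ground' components in every chord; a fixed 'figure' set of frequencies
    is repeated in every chord from figure_onset onward."""
    figure_freqs = rng.choice(FREQ_POOL, size=figure_size, replace=False)
    chords = []
    for i in range(n_chords):
        ground = rng.choice(FREQ_POOL, size=ground_per_chord, replace=False)
        freqs = np.concatenate([ground, figure_freqs]) if i >= figure_onset else ground
        chords.append(chord(freqs))
    return np.concatenate(chords)

signal = sfg_stimulus()
print(f"generated {signal.size / FS:.2f} s of stochastic figure-ground stimulus")
```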

  12. Temporal coordination in joint music performance: effects of endogenous rhythms and auditory feedback.

    Science.gov (United States)

    Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline

    2015-02-01

    Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.

  13. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    Science.gov (United States)

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. Among the problems in children with (C)APD are spatial difficulties, which have been overlooked despite their significance. Localization is the auditory ability to detect the position of sound sources in space and can help to separate the desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of (C)APD. Subjects and Methods In this analytical interventional study, 60 children suspected of (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07 years) and a training group (mean age 9.00 years). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones over 12 formal sessions (6 weeks). The spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to track the effects of the auditory lateralization training. Results In the training group, the mSAAT score and the spatial WRS in noise improved significantly after the auditory lateralization training (p≤0.001). Conclusions We applied auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results requires further research.
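
    The training stimuli are described as sound sources lateralized under headphones by interaural time differences (ITDs). A minimal way to impose an ITD is to delay one channel of a diotic signal by a few hundred microseconds, as in the sketch below; the 500-µs ITD and the noise-burst carrier are assumptions made for illustration, and this is not the training software used in the study.

```python
import numpy as np

FS = 44100  # sampling rate in Hz (assumed)

def apply_itd(mono, itd_us, fs=FS):
    """Return a (left, right) stereo pair with the given interaural time difference.
    A positive itd_us delays the right channel, lateralizing the image to the left."""
    n_delay = int(round(abs(itd_us) * 1e-6 * fs))
    delayed = np.concatenate([np.zeros(n_delay), mono])
    on_time = np.concatenate([mono, np.zeros(n_delay)])
    left, right = (on_time, delayed) if itd_us >= 0 else (delayed, on_time)
    return np.stack([left, right])

burst = 0.1 * np.random.default_rng(1).standard_normal(int(0.3 * FS))  # 300-ms noise burst
stereo = apply_itd(burst, itd_us=500)   # roughly 22 samples of delay at 44.1 kHz
print("stereo buffer shape (channels, samples):", stereo.shape)
```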

  14. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities.

    Science.gov (United States)

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  15. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows a full control of...... the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, which is state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software has been...

  16. Resolução temporal auditiva em idosos Auditory temporal resolution in elderly people

    Directory of Open Access Journals (Sweden)

    Flávia Duarte Liporaci

    2010-12-01

    PURPOSE: To assess auditory processing in elderly people using the temporal resolution Gaps-in-Noise test, and to verify whether the presence of hearing loss influences performance on this test. METHODS: Sixty-five elderly listeners, aged between 60 and 79 years, were assessed with the Gaps-in-Noise test. Sample selection comprised anamnesis, the mini-mental state examination, and a basic audiological evaluation. Participants were first analyzed as a single group and then divided into three groups according to the audiometric results at 500 Hz and 1, 2, 3, 4 and 6 kHz: G1 with normal hearing, G2 with mild hearing loss, and G3 with moderate hearing loss. RESULTS: Across the whole sample, the mean gap detection threshold and percentage of correct responses were 8.1 ms and 52.6% for the right ear and 8.2 ms and 52.2% for the left ear. In G1 these measures were 7.3 ms and 57.6% for the right ear and 7.7 ms and 55.8% for the left ear; in G2, 8.2 ms and 52.5% for the right ear and 7.9 ms and 53.2% for the left ear; and in G3, 9.2 ms and 45.2% for both ears. CONCLUSION: The presence of hearing loss raised gap detection thresholds and reduced the percentage of correct responses on the Gaps-in-Noise test.

  17. Auditory Processing Theories of Language Disorders: Past, Present, and Future

    Science.gov (United States)

    Miller, Carol A.

    2011-01-01

    Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…

  18. On Optimality in Auditory Information Processing

    CERN Document Server

    Karlsson, M

    2000-01-01

    We study limits for the detection and estimation of weak sinusoidal signals in the primary part of the mammalian auditory system using a stochastic Fitzhugh-Nagumo (FHN) model and an action-reaction model for synaptic plasticity. Our overall model covers the chain from a hair cell to a point just after the synaptic connection with a cell in the cochlear nucleus. The information processing performance of the system is evaluated using so-called phi-divergences from statistics, which quantify a dissimilarity between probability measures and are intimately related to a number of fundamental limits in statistics and information theory (IT). We show that there exists a set of parameters that can optimize several important phi-divergences simultaneously and that this set corresponds to a constant quiescent firing rate (QFR) of the spiral ganglion neuron. The optimal value of the QFR is frequency dependent but is essentially independent of the amplitude of the signal (for small amplitudes). Consequently, optimal proce...
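
    The peripheral stage described above rests on a stochastic FitzHugh-Nagumo (FHN) neuron driven by a weak sinusoid. The sketch below integrates a textbook FHN system with additive noise using the Euler-Maruyama method; the parameter values, the noise level and the omission of the synaptic action-reaction stage are all assumptions made for illustration rather than the paper's configuration.

```python
import numpy as np

def stochastic_fhn(amp=0.05, drive_freq=0.05, noise=0.25, t_max=400.0, dt=0.01,
                   a=0.7, b=0.8, eps=0.08, rng=np.random.default_rng(0)):
    """Euler-Maruyama integration of a FitzHugh-Nagumo neuron in its excitable regime,
    driven by a weak (sub-threshold) sinusoid plus additive white noise."""
    n_steps = int(t_max / dt)
    v, w = -1.2, -0.625                   # approximate resting state for these parameters
    trace = np.empty(n_steps)
    for i in range(n_steps):
        drive = amp * np.sin(2.0 * np.pi * drive_freq * i * dt)
        dv = v - v ** 3 / 3.0 - w + drive
        dw = eps * (v + a - b * w)
        v += dt * dv + noise * np.sqrt(dt) * rng.standard_normal()
        w += dt * dw
        trace[i] = v
    return trace

v = stochastic_fhn()
spikes = int(np.sum((v[1:] > 1.0) & (v[:-1] <= 1.0)))   # upward threshold crossings
print("noise-driven spikes over the run:", spikes)
```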

  19. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills. PMID:27013958

  1. Overview of Central Auditory Processing Deficits in Older Adults.

    Science.gov (United States)

    Atcherson, Samuel R; Nagaraj, Naveen K; Kennett, Sarah E W; Levisee, Meredith

    2015-08-01

    Although there are many reported age-related declines in the human body, the notion that a central auditory processing deficit exists in older adults has not always been clear. Hearing loss and both structural and functional central nervous system changes with advancing age contribute to how we listen, hear, and process auditory information. Even older adults with normal or near-normal hearing sensitivity may exhibit age-related central auditory processing deficits, as measured behaviorally and/or electrophysiologically. The purpose of this article is to provide an overview of assessment and rehabilitative approaches for central auditory processing deficits in older adults. It is hoped that the information presented here will help clinicians with older adult patients who, in the absence of other health-related conditions, do not exhibit the auditory processing behaviors typical of others of the same age and with comparable hearing sensitivity. PMID:27516715

  2. A physiologically inspired model of auditory stream segregation based on a temporal coherence analysis

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2012-01-01

    The ability to perceptually separate acoustic sources and focus one’s attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422...... activity across frequency. Using this approach, the described model is able to quantitatively account for classical streaming phenomena relying on frequency separation and tone presentation rate, such as the temporal coherence boundary and the fission boundary [L. P. A. S. van Noorden, doctoral...... dissertation, Institute for Perception Research, Eindhoven, NL, (1975)]. The same model also accounts for the perceptual grouping of distant spectral components in the case of synchronous presentation. The most essential components of the front-end and back-end processing in the framework of the presented...
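
    In models of this kind, temporal coherence amounts to measuring how correlated the envelopes of different peripheral frequency channels are over time: channels whose activity rises and falls together tend to be bound into one stream. The sketch below computes a pairwise envelope-correlation matrix for a toy alternating two-tone sequence; the crude band-pass front end and the tone parameters are assumptions and do not correspond to the published model.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16000  # sampling rate in Hz (assumed)

def channel_envelope(x, center_hz, bandwidth_hz=200.0):
    """Envelope of one peripheral channel: band-pass filtering then Hilbert magnitude."""
    band = [center_hz - bandwidth_hz / 2.0, center_hz + bandwidth_hz / 2.0]
    sos = butter(4, band, btype="bandpass", fs=FS, output="sos")
    return np.abs(hilbert(sosfilt(sos, x)))

def coherence_matrix(x, centers_hz):
    """Pairwise correlation of channel envelopes: a simple temporal-coherence matrix."""
    envelopes = np.vstack([channel_envelope(x, f) for f in centers_hz])
    return np.corrcoef(envelopes)

# Toy ABAB sequence: 100-ms tones at 500 Hz and 1000 Hz alternate in time, so the
# two channel envelopes are anti-correlated and the tones should segregate.
t = np.arange(int(0.1 * FS)) / FS
tone_a = np.sin(2.0 * np.pi * 500.0 * t)
tone_b = np.sin(2.0 * np.pi * 1000.0 * t)
sequence = np.concatenate([tone_a, tone_b] * 3)

coherence = coherence_matrix(sequence, centers_hz=[500.0, 1000.0])
print("envelope correlation between the two channels:", round(float(coherence[0, 1]), 2))
```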

  3. Effects of deafness and cochlear implant use on temporal response characteristics in cat primary auditory cortex.

    Science.gov (United States)

    Fallon, James B; Shepherd, Robert K; Nayagam, David A X; Wise, Andrew K; Heffer, Leon F; Landry, Thomas G; Irvine, Dexter R F

    2014-09-01

    We have previously shown that neonatal deafness of 7-13 months duration leads to loss of cochleotopy in the primary auditory cortex (AI) that can be reversed by cochlear implant use. Here we describe the effects of a similar duration of deafness and cochlear implant use on temporal processing. Specifically, we compared the temporal resolution of neurons in AI of young adult normal-hearing cats that were acutely deafened and implanted immediately prior to recording with that in three groups of neonatally deafened cats. One group of neonatally deafened cats received no chronic stimulation. The other two groups received up to 8 months of either low- or high-rate (50 or 500 pulses per second per electrode, respectively) stimulation from a clinical cochlear implant, initiated at 10 weeks of age. Deafness of 7-13 months duration had no effect on the duration of post-onset response suppression, latency, latency jitter, or the stimulus repetition rate at which units responded maximally (best repetition rate), but resulted in a statistically significant reduction in the ability of units to respond to every stimulus in a train (maximum following rate). None of the temporal response characteristics of the low-rate group differed from those in acutely deafened controls. In contrast, high-rate stimulation had diverse effects: it resulted in decreased suppression duration, longer latency and greater jitter relative to all other groups, and an increase in best repetition rate and cut-off rate relative to acutely deafened controls. The minimal effects of moderate-duration deafness on temporal processing in the present study are in contrast to its previously-reported pronounced effects on cochleotopy. Much longer periods of deafness have been reported to result in significant changes in temporal processing, in accord with the fact that duration of deafness is a major factor influencing outcome in human cochlear implantees.

  4. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    Science.gov (United States)

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  5. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    Science.gov (United States)

    Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  6. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  7. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  8. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.

  9. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe.

    Science.gov (United States)

    Berns, Gregory S; Cook, Peter F; Foxley, Sean; Jbabdi, Saad; Miller, Karla L; Marino, Lori

    2015-07-22

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, that auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.

  10. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) during three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  11. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    Full Text Available In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk aged 31 to 38 postmenstrual weeks (PMW) during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW the topography of responses in quiet sleep was different for "click" and voice stimuli: responses to "clicks" became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW auditory evoked delta-brushes progressively disappeared and were replaced by a low amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory driven delta-brushes represent a ubiquitous feature of the human sensory cortex.

  12. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate the performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test and Pitch Pattern Sequencing test. RESULTS: The effects of lateralization were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that in all tests children from the APD and SLI groups had significantly poorer performance compared to the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study showed, in children with SLI, an inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that the neural processes (required for auditory processing) are different between auditory processing and speech disorders.

  13. Encoding of Temporal Information by Timing, Rate, and Place in Cat Auditory Cortex

    OpenAIRE

    Imaizumi, Kazuo; Priebe, Nicholas J.; Sharpee, Tatyana O.; Cheung, Steven W.; Schreiner, Christoph E.

    2010-01-01

    A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (...

  14. Predictive uncertainty in auditory sequence processing

    OpenAIRE

    Niels Chr. Hansen; Marcus T. Pearce

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic e...

  15. An auditory illusion of infinite tempo change based on multiple temporal levels.

    Directory of Open Access Journals (Sweden)

    Guy Madison

    Full Text Available Humans and a few select insect and reptile species synchronise inter-individual behaviour without any time lag by predicting the time of future events rather than reacting to them. This is evident in music performance, dance, and drill. Although repetition of equal time intervals (i.e., isochrony) is the central principle for such prediction, this simple information is used in a flexible and complex way that accommodates multiples, subdivisions, and gradual changes of intervals. The scope of this flexibility remains largely uncharted, and the underlying mechanisms are a matter for speculation. Here I report an auditory illusion that highlights some aspects of this behaviour and that provides a powerful tool for its future study. A sound pattern is described that affords multiple alternative and concurrent rates of recurrence (temporal levels). An algorithm that systematically controls time intervals and the relative loudness among these levels creates an illusion that the perceived rate speeds up or slows down infinitely. Human participants synchronised hand movements with their perceived rate of events, and exhibited a change in their movement rate that was several times larger than the physical change in the sound pattern. The illusion demonstrates the duality between the external signal and the internal predictive process, such that people's tendency to follow their own subjective pulse overrides the overall properties of the stimulus pattern. Furthermore, accurate synchronisation with sounds separated by more than 8 s demonstrates that multiple temporal levels are employed for facilitating temporal organisation and integration by the human brain. A number of applications of the illusion and the stimulus pattern are suggested.

  16. Polymodal information processing via temporal cortex Area 37 modeling

    Science.gov (United States)

    Peterson, James K.

    2004-04-01

    A model of biological information processing is presented that consists of auditory and visual subsystems linked to temporal cortex and limbic processing. A biologically based algorithm is presented for the fusing of information sources of fundamentally different modalities. Proof of this concept is outlined by a system which combines auditory input (musical sequences) and visual input (illustrations such as paintings) via a model of cortex processing for Area 37 of the temporal cortex. The training data can be used to construct a connectionist model whose biological relevance is suspect yet is still useful and a biologically based model which achieves the same input to output map through biologically relevant means. The constructed models are able to create from a set of auditory and visual clues a combined musical/illustration output which shares many of the properties of the original training data. These algorithms are not dependent on these particular auditory/visual modalities and hence are of general use in the intelligent computation of outputs that require sensor fusion.

  17. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder.

    Science.gov (United States)

    Schochat, E; Musiek, F E; Alonso, R; Ogata, J

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  18. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder

    Directory of Open Access Journals (Sweden)

    E. Schochat

    2010-08-01

    Full Text Available The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 µV (mean), 0.39 (SD - standard deviation) for the (C)APD group and 1.18 µV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 µV (mean), 0.31 (SD) for the (C)APD group and 1.00 µV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 µV (mean), 0.82 (SD)] and C3-A2 [1.24 µV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  19. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Background: In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results: We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion: The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  20. Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?

    Science.gov (United States)

    Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno

    2012-01-01

    The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…

  1. Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns.

    Directory of Open Access Journals (Sweden)

    Andres Carrasco

    Full Text Available Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency modulated sweeps), and ecologically relevant (con-specific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.

  2. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty).

  3. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models explore noise propagation mechanisms associated with noise attenuation and transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bones and cochlea geometry. Results from these studies have enabled a greater understanding of hearing protector to flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast related impulse noise on human auditory mechanisms and brain tissue.

  4. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  5. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  6. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
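
    As a purely illustrative aside, the core quantity used across these records, Shannon entropy as a measure of predictive uncertainty, can be computed from any estimated next-event distribution, such as one produced by a variable-order Markov model of melodies. The minimal sketch below uses hypothetical next-note probabilities; only the entropy formula itself reflects the abstracts, and none of this is the study's own code.

    import numpy as np

    def shannon_entropy(probabilities):
        """Shannon entropy (in bits) of a discrete next-event distribution."""
        p = np.asarray(probabilities, dtype=float)
        p = p[p > 0]              # zero-probability events contribute nothing
        p = p / p.sum()           # renormalise against rounding error
        return float(-np.sum(p * np.log2(p)))

    # Hypothetical next-note distributions over a 12-note alphabet
    low_uncertainty = [0.85] + [0.15 / 11] * 11     # one continuation strongly expected
    high_uncertainty = [1 / 12] * 12                # all continuations equally likely
    print(shannon_entropy(low_uncertainty))         # about 1.1 bits: low predictive uncertainty
    print(shannon_entropy(high_uncertainty))        # about 3.6 bits: high predictive uncertainty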

  7. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching action, even when reaching towards auditory targets. PMID:26253323

  8. Temporal Processing Dysfunction in Schizophrenia

    Science.gov (United States)

    Carroll, Christine A.; Boggs, Jennifer; O'Donnell, Brian F.; Shekhar, Anantha; Hetrick, William P.

    2008-01-01

    Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the…

  9. Contact process with temporal disorder

    Science.gov (United States)

    Barghathi, Hatem; Vojta, Thomas; Hoyos, José A.

    2016-08-01

    We investigate the influence of time-varying environmental noise, i.e., temporal disorder, on the nonequilibrium phase transition of the contact process. Combining a real-time renormalization group, scaling theory, and large scale Monte-Carlo simulations in one and two dimensions, we show that the temporal disorder gives rise to an exotic critical point. At criticality, the effective noise amplitude diverges with increasing time scale, and the probability distribution of the density becomes infinitely broad, even on a logarithmic scale. Moreover, the average density and survival probability decay only logarithmically with time. This infinite-noise critical behavior can be understood as the temporal counterpart of infinite-randomness critical behavior in spatially disordered systems, but with exchanged roles of space and time. We also analyze the generality of our results, and we discuss potential experiments.
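
    A toy Monte Carlo sketch of what "temporal disorder" means operationally for a contact process is given below. It is not the renormalization-group analysis or the large-scale simulations reported in the abstract, and all parameter values (lattice size, mean infection rate, noise amplitude) are illustrative assumptions.

    import numpy as np

    def contact_process(L=100, sweeps=500, lam_mean=3.3, noise=1.0, seed=0):
        """Toy 1D contact process with a time-varying infection rate (temporal disorder).

        Each sweep draws a fresh infection rate lam; within the sweep, randomly
        chosen active sites either infect a random neighbour (prob lam/(1+lam))
        or become inactive (prob 1/(1+lam)). Returns the density of active sites.
        """
        rng = np.random.default_rng(seed)
        state = np.ones(L, dtype=bool)          # start from a fully active lattice
        density = []
        for _ in range(sweeps):
            lam = max(0.1, lam_mean + noise * rng.normal())   # temporal disorder
            p_infect = lam / (1.0 + lam)
            for _ in range(L):
                active = np.flatnonzero(state)
                if active.size == 0:            # absorbing (all-inactive) state reached
                    break
                i = rng.choice(active)
                if rng.random() < p_infect:
                    state[(i + rng.choice((-1, 1))) % L] = True
                else:
                    state[i] = False
            density.append(state.mean())
        return np.array(density)

    rho = contact_process()
    print(rho[-5:])   # late-time density; a smaller lam_mean drives the lattice into the absorbing phase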

  10. Bilateral collicular interaction: modulation of auditory signal processing in frequency domain.

    Science.gov (United States)

    Cheng, L; Mei, H-X; Tang, J; Fu, Z-Y; Jen, P H-S; Chen, Q-C

    2013-04-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from a variety of lower auditory nuclei, intrinsic projections within the IC, contralateral IC through the commissure of the IC and the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in the modulation of frequency-domain signal processing of mice using electrophysiological recording and focal electrical stimulation. Focal electrical stimulation of neurons in one IC produces widespread inhibition and focused facilitation of responses of neurons in the other IC. This bilateral collicular interaction decreases the response magnitude and lengthens the response latency of inhibited IC neurons but produces an opposite effect on the response of facilitated IC neurons. In the frequency domain, the focal electrical stimulation of one IC sharpens or expands the frequency tuning curves (FTCs) of neurons in the other IC to improve frequency sensitivity and the frequency response range. The focal electrical stimulation also produces a shift in the best frequency (BF) of modulated IC (ICMdu) neurons toward that of electrically stimulated IC (ICES) neurons. The degree of bilateral collicular interaction is dependent upon the difference in the BF between the ICES neurons and ICMdu neurons. These data suggest that bilateral collicular interaction is a part of dynamic acoustic signal processing that adjusts and improves signal processing as well as reorganizes collicular representation of signal parameters according to the acoustic experience.

  11. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. The present study analyzed the role of modulation filtering in the preprocessing of different speech intelligibility models by comparing predictions from models that either assume a spectro-temporal (i.e., two-dimensional) or a temporal-only (i.e., one-dimensional) modulation filterbank. Furthermore, the role of the decision metric for speech intelligibility was investigated ... subtraction. The results suggested that a decision metric based on the SNRenv may provide a more general basis for predicting speech intelligibility than a metric based on the MTF. Moreover, the one-dimensional modulation filtering process was found to be sufficient to account for the data when combined ...
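
    As a rough, hypothetical illustration of the decision-metric idea contrasted in this record, the sketch below computes a single-band envelope-power ratio between a modulated (speech-like) signal and a noise signal. The actual SNRenv metric operates on a full (spectro-)temporal modulation filterbank and on noisy speech processed by the system under test; the test signals, modulation band, and filter order here are assumptions for demonstration only.

    import numpy as np
    from scipy.signal import hilbert, butter, sosfiltfilt

    def envelope_power_ratio_db(target, noise, fs, mod_band=(1.0, 8.0)):
        """Simplified single-band envelope-power ratio (an SNRenv-like quantity)."""
        def mod_power(x):
            env = np.abs(hilbert(x))                       # temporal envelope
            env = env - env.mean()                         # remove the DC component
            sos = butter(2, mod_band, btype="bandpass", fs=fs, output="sos")
            return np.mean(sosfiltfilt(sos, env) ** 2)     # envelope power in the band
        return 10 * np.log10(mod_power(target) / mod_power(noise))

    # Hypothetical signals: a 4 Hz amplitude-modulated tone vs. steady Gaussian noise
    fs = 16000
    t = np.arange(0, 2.0, 1 / fs)
    speech_like = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)
    noise = np.random.default_rng(1).normal(size=t.size)
    print(envelope_power_ratio_db(speech_like, noise, fs))  # well above 0 dB for this example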

  12. Functional hemispheric specialization in processing phonemic and prosodic auditory changes in neonates

    Directory of Open Access Journals (Sweden)

    Takeshi Arimitsu

    2011-09-01

    Full Text Available This study focuses on the early cerebral base of speech perception by examining functional lateralization in neonates for processing segmental and suprasegmental features of speech. For this purpose, auditory evoked responses of full-term neonates to phonemic and prosodic contrasts were measured in their temporal area and part of the frontal and parietal areas using near-infrared spectroscopy (NIRS). Stimuli used here were the phonemic contrast /itta/ and /itte/ and the prosodic contrast of declarative and interrogative forms /itta/ and /itta?/. The results showed clear hemodynamic responses to both phonemic and prosodic changes in the temporal areas and part of the parietal and frontal regions. In particular, significantly higher hemoglobin (Hb) changes were observed for the prosodic change in the right temporal area than for that in the left one, whereas Hb responses to the vowel change were similarly elicited in bilateral temporal areas. However, Hb responses to the vowel contrast were asymmetrical in the parietal area (around supra marginal gyrus), with stronger activation in the left. These results suggest a specialized function of the right hemisphere in prosody processing, which is already present in neonates. The parietal activities during phonemic processing were discussed in relation to verbal-auditory short-term memory. On the basis of this study and previous studies on older infants, the developmental process of functional lateralization from birth to 2 years of age for vowel and prosody was summarized.

  13. Binding ‘when’ and ‘where’ impairs temporal, but not spatial recall in auditory and visual working memory

    Directory of Open Access Journals (Sweden)

    Franco Delogu

    2012-03-01

    Full Text Available Information about where and when events happened seems naturally linked, but only a few studies have investigated whether and how these features are associated in working memory. We tested whether the location of items and their temporal order are jointly or independently encoded. We also verified whether spatio-temporal binding is influenced by the sensory modality of items. Participants were requested to memorize the location and/or the serial order of five items (environmental sounds or pictures) sequentially presented from five different locations. Next, they were asked to recall either the item location or their order of presentation within the sequence. Attention during encoding was manipulated by contrasting blocks of trials in which participants were requested to encode only one feature with blocks of trials where they had to encode both features. Results show an interesting interaction between task and attention. Accuracy in the serial order recall was affected by the simultaneous encoding of item location, whereas the recall of item location was unaffected by the concurrent encoding of the serial order of items. This asymmetric influence of attention on the two tasks was similar for the auditory and visual modality. Together, these data indicate that item location is processed in a relatively automatic fashion, whereas maintaining serial order is more demanding in terms of attention. The remarkably analogous results for auditory and visual memory performance suggest that the binding of serial order and location in working memory is not modality-dependent, and may involve common intersensory mechanisms.

  14. Processing of sounds by population spikes in a model of primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Alex Loebel

    2007-10-01

    Full Text Available We propose a model of the primary auditory cortex (A1), in which each iso-frequency column is represented by a recurrent neural network with short-term synaptic depression. Such networks can emit Population Spikes, in which most of the neurons fire synchronously for a short time period. Different columns are interconnected in a way that reflects the tonotopic map in A1, and population spikes can propagate along the map from one column to the next, in a temporally precise manner that depends on the specific input presented to the network. The network, therefore, processes incoming sounds by precise sequences of population spikes that are embedded in a continuous asynchronous activity, with both of these response components carrying information about the inputs and interacting with each other. With these basic characteristics, the model can account for a wide range of experimental findings. We reproduce neuronal frequency tuning curves, whose width depends on the strength of the intracortical inhibitory and excitatory connections. Non-simultaneous two-tone stimuli show forward masking depending on their temporal separation, as well as on the duration of the first stimulus. The model also exhibits non-linear suppressive interactions between sub-threshold tones and broad-band noise inputs, similar to the hypersensitive locking suppression recently demonstrated in auditory cortex. We derive several predictions from the model. In particular, we predict that spontaneous activity in primary auditory cortex gates the temporally locked responses of A1 neurons to auditory stimuli. Spontaneous activity could, therefore, be a mechanism for rapid and reversible modulation of cortical processing.
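
    A minimal mean-field sketch in the spirit of the mechanism described here (a single recurrent excitatory column with short-term synaptic depression, Tsodyks-Markram style) is shown below. It is not the authors' network model: the single-column reduction, the parameter values, and the input pulses are all illustrative assumptions, intended only to show how a brief input can trigger a population-spike-like transient whose size is limited by resource depletion.

    import numpy as np

    def depressing_column(T=2.0, dt=1e-3, J=6.0, U=0.5, tau=0.01, tau_rec=0.8,
                          pulse_times=(0.5, 1.5), pulse_amp=5.0):
        """Rate model of one recurrent column with short-term synaptic depression.

        E: population rate (arbitrary units); x: fraction of available synaptic
        resources. Brief external pulses can ignite population-spike-like
        transients that transiently deplete the synaptic resources.
        """
        steps = int(T / dt)
        E = np.zeros(steps)
        x = np.ones(steps)
        for t in range(1, steps):
            I_ext = pulse_amp if any(abs(t * dt - p) < 0.005 for p in pulse_times) else 0.0
            drive = J * x[t - 1] * E[t - 1] + I_ext          # depressed recurrent drive plus input
            E[t] = E[t - 1] + dt * (-E[t - 1] + max(drive, 0.0)) / tau
            x[t] = x[t - 1] + dt * ((1.0 - x[t - 1]) / tau_rec - U * x[t - 1] * E[t - 1])
        return E, x

    E, x = depressing_column()
    print(round(E.max(), 1), round(x.min(), 3))   # a large rate transient coincides with depleted resources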

  15. Processing of Uncertainty Temporal Relations

    Institute of Scientific and Technical Information of China (English)

    钟绍春; 刘大有

    1996-01-01

    A kind of classification of temporal relations between propositions is presented. By introducing a temporal approaching relation, a new temporal logic based on time-points and time-intervals is proposed, which can describe uncertain temporal relations. Finally, some properties of temporal propositions under uncertain relations are presented.

  16. Early neural disruption and auditory processing outcomes in rodent models: Implications for developmental language disability

    Directory of Open Access Journals (Sweden)

    Roslyn Holly Fitch

    2013-10-01

    Full Text Available Most researchers in the field of neural plasticity are familiar with the "Kennard Principle," which purports a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate – both developmentally and functionally – with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human "term," but only transient deficits (undetectable in adulthood) when induced in a "preterm" window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in

  17. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  18. Contribution of Temporal Processing Skills to Reading Comprehension in 8-Year-Olds: Evidence for a Mediation Effect of Phonological Awareness

    Science.gov (United States)

    Malenfant, Nathalie; Grondin, Simon; Boivin, Michel; Forget-Dubois, Nadine; Robaey, Philippe; Dionne, Ginette

    2012-01-01

    This study tested whether the association between temporal processing (TP) and reading is mediated by phonological awareness (PA) in a normative sample of 615 eight-year-olds. TP was measured with auditory and bimodal (visual-auditory) temporal order judgment tasks and PA with a phoneme deletion task. PA partially mediated the association between…

  19. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    ... reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution ... Throughout the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques.

  20. ERPs reveal the temporal dynamics of auditory word recognition in specific language impairment.

    Science.gov (United States)

    Malins, Jeffrey G; Desroches, Amy S; Robertson, Erin K; Newman, Randy Lynn; Archibald, Lisa M D; Joanisse, Marc F

    2013-07-01

    We used event-related potentials (ERPs) to compare auditory word recognition in children with specific language impairment (SLI group; N=14) to a group of typically developing children (TD group; N=14). Subjects were presented with pictures of items and heard auditory words that either matched or mismatched the pictures. Mismatches overlapped expected words in word-onset (cohort mismatches; see: DOLL, hear: dog), rhyme (CONE - bone), or were unrelated (SHELL - mug). In match trials, the SLI group showed a different pattern of N100 responses to auditory stimuli compared to the TD group, indicative of early auditory processing differences in SLI. However, the phonological mapping negativity (PMN) response to mismatching items was comparable across groups, suggesting that just like TD children, children with SLI are capable of establishing phonological expectations and detecting violations of these expectations in an online fashion. Perhaps most importantly, we observed a lack of attenuation of the N400 for rhyming words in the SLI group, which suggests that either these children were not as sensitive to rhyme similarity as their typically developing peers, or did not suppress lexical alternatives to the same extent. These findings help shed light on the underlying deficits responsible for SLI.

  1. A neurophysiological deficit in early visual processing in schizophrenia patients with auditory hallucinations.

    Science.gov (United States)

    Kayser, Jürgen; Tenke, Craig E; Kroppmann, Christopher J; Alschuler, Daniel M; Fekri, Shiva; Gil, Roberto; Jarskog, L Fredrik; Harkavy-Friedman, Jill M; Bruder, Gerard E

    2012-09-01

    Existing 67-channel event-related potentials, obtained during recognition and working memory paradigms with words or faces, were used to examine early visual processing in schizophrenia patients prone to auditory hallucinations (AH, n = 26) or not (NH, n = 49) and healthy controls (HC, n = 46). Current source density (CSD) transforms revealed distinct, strongly left- (words) or right-lateralized (faces; N170) inferior-temporal N1 sinks (150 ms) in each group. N1 was quantified by temporal PCA of peak-adjusted CSDs. For words and faces in both paradigms, N1 was substantially reduced in AH compared with NH and HC, who did not differ from each other. The difference in N1 between AH and NH was not due to overall symptom severity or performance accuracy, with both groups showing comparable memory deficits. Our findings extend prior reports of reduced auditory N1 in AH, suggesting a broader early perceptual integration deficit that is not limited to the auditory modality.

  2. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Full Text Available Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5–20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  3. Effects of sleep deprivation on central auditory processing

    OpenAIRE

    Liberalesso Paulo Breno; D’Andrea Karlin Fabianne; Cordeiro Mara L; Zeigelboim Bianca; Marques Jair; Jurkiewicz Ari

    2012-01-01

    Abstract Background: Sleep deprivation is extremely common in contemporary society and is considered a frequent cause of disturbances in behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse informat...

  4. Temporal information processing technology and its applications

    CERN Document Server

    Tang, Yong; Tang, Na

    2011-01-01

    Presenting a systematic introduction to temporal models and time calculation, this volume explores temporal information processing technology and its applications. Topics include the time model in terms of calculus and logic, temporal data models and database concepts, temporal query language, and more.

  5. Quantifying Auditory Temporal Stability in a Large Database of Recorded Music

    Science.gov (United States)

    Ellis, Robert J.; Duan, Zhiyan; Wang, Ye

    2014-01-01

    “Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal stability of a series of detected beats. Such a method, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications. PMID:25469636

  6. Quantifying auditory temporal stability in a large database of recorded music.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    Full Text Available "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy" in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training, exercise (e.g., jogging, or entertainment (e.g., continuous dance mixes. Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature", none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files. A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  7. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
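
    To make the comparison of coding schemes concrete, the sketch below estimates mutual information between a discrete stimulus variable (click-train repetition rate) and a binned response feature using a simple plug-in estimator. The synthetic responses and binning choices are assumptions for illustration only; the study's actual spike-train information analysis and bias corrections are not reproduced here.

    import numpy as np

    def mutual_information(stimulus, response, bins=8):
        """Plug-in mutual information (bits) between a discrete stimulus label and a
        continuous response feature discretized into quantile bins (no bias correction)."""
        stimulus = np.asarray(stimulus)
        response = np.asarray(response, dtype=float)
        edges = np.quantile(response, np.linspace(0.0, 1.0, bins + 1))
        r_binned = np.digitize(response, edges[1:-1])        # bin indices 0 .. bins-1
        labels = {s: i for i, s in enumerate(np.unique(stimulus))}
        joint = np.zeros((len(labels), bins))
        for s, r in zip(stimulus, r_binned):
            joint[labels[s], r] += 1
        joint /= joint.sum()
        ps = joint.sum(axis=1, keepdims=True)                # marginal over stimuli
        pr = joint.sum(axis=0, keepdims=True)                # marginal over response bins
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

    # Hypothetical data: mean ISI tracks the repetition rate, a rate-like feature does not
    rng = np.random.default_rng(3)
    rates = np.repeat([5, 10, 20, 40], 100)                  # click-train rates (Hz)
    isi = 1000.0 / rates + rng.normal(0, 5, rates.size)      # ISI ~ stimulus period + noise
    count = 20 + rng.normal(0, 8, rates.size)                # uninformative spike count
    print(mutual_information(rates, isi), mutual_information(rates, count))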

  8. Visualization process of Temporal Data

    OpenAIRE

    Daassi, Chaouki; Nigay, Laurence; Fauvet, Marie-Christine

    2004-01-01

    International audience Temporal data are abundantly present in many application domains such as banking, financial, clinical, geographical applications and so on. Temporal data have been extensively studied from data mining and database perspectives. Complementary to these studies, our work focuses on the visualization techniques of temporal data: a wide range of visualization techniques have been designed to assist the users to visually analyze and manipulate temporal data. All the techni...

  9. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses at 20 ms before and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control.
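
    The predictable/unpredictable contrast hinges entirely on how the voice-to-stimulus onset delay is assigned across trials. A minimal sketch of that scheduling logic is given below; the function name and trial count are illustrative assumptions about the design, not the authors' presentation code.

        import random

        def make_block(predictable, n_trials=30, delays_ms=(500, 750, 1000), fixed_delay_ms=750):
            """Per-trial voice-to-stimulus onset delays for a +100 cent pitch-shift block.

            Predictable blocks use one fixed delay on every trial; unpredictable blocks
            draw the delay at random from the same set of values on every trial.
            """
            if predictable:
                return [fixed_delay_ms] * n_trials
            return [random.choice(delays_ms) for _ in range(n_trials)]

        random.seed(0)
        print(make_block(predictable=True)[:5])    # [750, 750, 750, 750, 750]
        print(make_block(predictable=False)[:5])   # e.g. [1000, 750, 500, 750, 1000]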

  10. Contribution of psychoacoustics and neuroaudiology in revealing correlation of mental disorders with central auditory processing disorders

    OpenAIRE

    Iliadou, V; Iakovides, S

    2003-01-01

    Background Psychoacoustics is a fascinating developing field concerned with the evaluation of the hearing sensation as an outcome of a sound or speech stimulus. Neuroaudiology with electrophysiologic testing, records the electrical activity of the auditory pathways, extending from the 8th cranial nerve up to the cortical auditory centers as a result of external auditory stimuli. Central Auditory Processing Disorders may co-exist with mental disorders and complicate diagnosis and outcome. Desi...

  11. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism: An EEG Power and BOLD fMRI Investigation

    Science.gov (United States)

    Hames, Elizabeth C.; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C.; Baker, Mary; Zupancic, Stephen; O’Boyle, Michael; Richman, David

    2016-01-01

    Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neural correlates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20–28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block vs. the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar to that of NTs than unimodal processing is. PMID:27148020

  12. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism: An EEG Power and BOLD fMRI Investigation.

    Science.gov (United States)

    Hames, Elizabeth C; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C; Baker, Mary; Zupancic, Stephen; O'Boyle, Michael; Richman, David

    2016-01-01

    Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neural correlates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20-28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block vs. the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar to that of NTs than unimodal processing is. PMID:27148020

  13. Implicit learning of between-group intervals in auditory temporal structures.

    Science.gov (United States)

    Terry, J; Stevens, C J; Weidemann, G; Tillmann, B

    2016-08-01

    Implicit learning of temporal structure has primarily been reported when events within a sequence (e.g., visual-spatial locations, tones) are systematically ordered and correlated with the temporal structure. An auditory serial reaction time task was used to investigate implicit learning of temporal intervals between pseudorandomly ordered syllables. Over exposure, participants identified syllables presented in sequences with weakly metrical temporal structures. In a test block, the temporal structure differed from exposure only in the duration of the interonset intervals (IOIs) between groups. It was hypothesized that reaction time (RT) to syllables following between-group IOIs would decrease with exposure and increase at test. In Experiments 1 and 2, the sequences presented over exposure and test were counterbalanced across participants (Pattern 1 and Pattern 2 conditions). An RT increase at test to syllables following between-group IOIs was only evident in the condition that presented an exposure structure with a slightly stronger meter (Pattern 1 condition). The Pattern 1 condition also elicited a global expectancy effect: Test block RT slowed to earlier-than-expected syllables (i.e., syllables shifted to an earlier beat) but not to later-than-expected syllables. Learning of between-group IOIs and the global expectancy effect extended to the Pattern 2 condition when meter was strengthened with an external pulse (Experiment 2). Experiment 3 further demonstrated implicit learning of a new weakly metrical structure with only earlier-than-expected violations at test. Overall findings demonstrate learning of weakly metrical rhythms without correlated event structures (i.e., sequential syllable orders). They further suggest the presence of a global expectancy effect mediated by metrical strength. PMID:27301354

  14. Bilateral Collicular Interaction: Modulation of Auditory Signal Processing in Amplitude Domain

    Science.gov (United States)

    Fu, Zi-Ying; Wang, Xin; Jen, Philip H.-S.; Chen, Qi-Cai

    2012-01-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from many lower auditory nuclei, intrinsic projections within the IC, the contralateral IC through the commissure of the IC, and the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in modulating amplitude-domain signal processing using electrophysiological recording, acoustic and focal electrical stimulation. Focal electrical stimulation of one (ipsilateral) IC produces widespread inhibition (61.6%) and focused facilitation (9.1%) of responses of neurons in the other (contralateral) IC, while 29.3% of the neurons are not affected. Bilateral collicular interaction produces a decrease in the response magnitude and an increase in the response latency of inhibited IC neurons but produces opposite effects on the response of facilitated IC neurons. These two groups of neurons are not separately located and are tonotopically organized within the IC. The modulation effect is most effective at low sound level and is dependent upon the interval between the acoustic and electric stimuli. The focal electrical stimulation of the ipsilateral IC compresses or expands the rate-level functions of contralateral IC neurons. The focal electrical stimulation also produces a shift in the minimum threshold and dynamic range of contralateral IC neurons for as long as 150 minutes. The degree of bilateral collicular interaction is dependent upon the difference in the best frequency between the electrically stimulated IC neurons and modulated IC neurons. These data suggest that bilateral collicular interaction mainly changes the ratio between excitation and inhibition during signal processing so as to sharpen the amplitude sensitivity of IC neurons. Bilateral interaction may also be involved in acoustic...

  15. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf;

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means... Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.
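
    Since multi-channel DRC is central to the study, a bare-bones sketch of a static single-channel compression rule may help fix ideas; the knee point and compression ratio below are illustrative assumptions, not the hearing-aid settings or the Jepsen et al. (2008) front end used in the study.

        import numpy as np

        def drc_gain_db(level_db, threshold_db=50.0, ratio=3.0):
            """Static compression: above threshold, output grows 1/ratio dB per input dB."""
            level_db = np.asarray(level_db, dtype=float)
            over = np.maximum(level_db - threshold_db, 0.0)
            return -over * (1.0 - 1.0 / ratio)          # negative gain = attenuation

        # Example input/output curve for a 3:1 compressor with a 50 dB knee point
        inputs = np.array([30, 50, 70, 90])
        print(inputs + drc_gain_db(inputs))             # -> [30.  50.  56.67  63.33] approximately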

  16. Auditory Streaming as an Online Classification Process with Evidence Accumulation.

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
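
    The proposed account can be caricatured in a few lines of simulation: while one percept dominates, evidence for the opposite percept accumulates noisily, and a switch occurs stochastically once that evidence overtakes the evidence that installed the current percept. The accumulation rule, hazard function, and parameter values below are illustrative assumptions, not the fitted probabilistic model reported in the paper.

        import numpy as np

        def simulate_phases(n_phases=400, drift=1.0, noise=1.0,
                            hazard=0.05, decay=0.99, seed=0):
            """Toy evidence-accumulation account of bistable auditory streaming.

            While one percept dominates with strength S, evidence C for the opposite
            percept accumulates noisily and S slowly fades. On each time step a switch
            occurs stochastically with a probability that grows as C overtakes S. Long
            phases end with large C, which becomes the strength of the next percept and
            prolongs the next phase, giving positively correlated successive durations.
            """
            rng = np.random.default_rng(seed)
            strength, durations = 30.0, []
            for _ in range(n_phases):
                challenger, steps = 0.0, 0
                while True:
                    steps += 1
                    challenger += max(drift + rng.normal(0.0, noise), 0.0)
                    strength *= decay
                    if rng.random() < hazard / (1.0 + np.exp(strength - challenger)):
                        break
                durations.append(steps)
                strength = challenger   # evidence for the new percept becomes its strength
            return np.array(durations)

        d = simulate_phases()
        print("mean phase:", round(d.mean(), 1), "steps;",
              "lag-1 correlation:", round(float(np.corrcoef(d[:-1], d[1:])[0, 1]), 2))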

  17. Auditory Streaming as an Online Classification Process with Evidence Accumulation

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named “auditory streaming”. Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally. PMID:26671774

  18. The role of the locus coeruleus-noradrenaline system in temporal attention and uncertainty processing

    OpenAIRE

    Stephen B.R.E. Brown

    2015-01-01

    This dissertation explores the involvement of the locus coeruleus-noradrenaline (LC-NE) system in both temporal attention and uncertainty processing. To this end, a number of cognitive tasks are used (Stroop, passive viewing, attentional blink, accessory stimulus, auditory oddball) and a number of techniques are utilized (electroencephalogram [EEG], pupillometry, psychopharmacology).

  19. Auditory processing in dysphonic children Processamento auditivo em crianças disfônicas

    Directory of Open Access Journals (Sweden)

    Mirian Aratangy Arnaut

    2011-06-01

    Full Text Available Contemporary cross-sectional cohort study. There is evidence of the influence of auditory perception on the development of oral and written language, as well as on the self-perception of vocal conditions. The maturation of the auditory system can impact this process. OBJECTIVE: To characterize the auditory skills of temporal ordering and localization in dysphonic children. MATERIALS AND METHODS: We assessed 42 children (4 to 8 years). Study group: 31 dysphonic children; Comparison group: 11 children without vocal change complaints. They all had normal auditory thresholds and normal cochleo-eyelid reflexes. They were submitted to a simplified assessment of auditory processing (Pereira, 1993). In order to compare the groups, we used the Mann-Whitney and Kruskal-Wallis statistical tests. Level of significance: 0.05 (5%). RESULTS: Upon simplified assessment, 100% of the Comparison Group and 61.29% of the Study Group had normal results. The groups were similar in the localization and verbal sequential memory tests. Nonverbal sequential memory showed worse results in dysphonic children. In this group, performance was worse among children aged four to six years. CONCLUSION: The dysphonic children showed changes in the localization or temporal ordering skills; the skill of non-verbal temporal ordering differentiated the dysphonic group. In this group, sound localization improved with age.

  20. Cortical substrates and functional correlates of auditory deviance processing deficits in schizophrenia

    Directory of Open Access Journals (Sweden)

    Anthony J. Rissling

    2014-01-01

    Full Text Available Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant−Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning, and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics.
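
    The source-resolution step rests on decomposing the continuous 68-channel EEG into independent components before clustering them across subjects. The sketch below shows that decomposition step on synthetic data with scikit-learn's FastICA; the study's actual pipeline (algorithm, preprocessing, and clustering criteria) is not specified here, so everything beyond "unmix multichannel EEG into ICs" is an illustrative assumption.

        import numpy as np
        from sklearn.decomposition import FastICA

        # Illustrative stand-in for continuous EEG: 68 channels, 10 s at 250 Hz.
        rng = np.random.default_rng(0)
        n_channels, n_samples = 68, 2500
        sources = rng.laplace(size=(n_channels, n_samples))   # super-Gaussian "brain" sources
        mixing = rng.normal(size=(n_channels, n_channels))    # stand-in for volume conduction
        eeg = mixing @ sources                                # what the electrodes would record

        # Unmix into independent components: each row of `components` is an IC time course,
        # and each column of `ica.mixing_` is its scalp projection (the kind of feature
        # used when clustering ICs across subjects on topography and spectra).
        ica = FastICA(n_components=20, whiten="unit-variance", max_iter=1000, random_state=0)
        components = ica.fit_transform(eeg.T).T               # (20, 2500)
        print(components.shape, ica.mixing_.shape)            # (20, 2500) (68, 20)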

  1. Accounting for the phenomenology and varieties of auditory verbal hallucination within a predictive processing framework.

    OpenAIRE

    Wilkinson, S

    2014-01-01

    Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH,...

  2. Periodicity extraction in the anuran auditory nerve. II: Phase and temporal fine structure.

    Science.gov (United States)

    Simmons, A M; Reese, G; Ferragamo, M

    1993-06-01

    phase locking to simple sinusoids. Increasing stimulus intensity also shifts the synchronized responses of some fibers away from the fundamental frequency to one of the low-frequency harmonics in the stimuli. These data suggest that the synchronized firing of bullfrog eighth nerve fibers operates to extract the waveform periodicity of complex, multiple-harmonic stimuli, and this periodicity extraction is influenced by the phase spectrum and temporal fine structure of the stimuli. The similarity in response patterns of amphibian papilla and basilar papilla fibers argues that the frog auditory system employs primarily a temporal mechanism for extraction of first harmonic periodicity.

  3. Auditory evoked potentials to spectro-temporal modulation of complex tones in normal subjects and patients with severe brain injury.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L; Stokes, M; Munday, R; Haque, N

    2000-05-01

    In order to assess higher auditory processing capabilities, long-latency auditory evoked potentials (AEPs) were recorded to synthesized musical instrument tones in 22 post-comatose patients with severe brain injury causing variably attenuated behavioural responsiveness. On the basis of normative studies, three different types of spectro-temporal modulation were employed. When a continuous 'clarinet' tone changes pitch once every few seconds, N1/P2 potentials are evoked at latencies of approximately 90 and 180 ms, respectively. Their distribution in the fronto-central region is consistent with generators in the supratemporal cortex of both hemispheres. When the pitch is modulated at a much faster rate ( approximately 16 changes/s), responses to each change are virtually abolished but potentials with similar distribution are still elicited by changing the timbre (e.g. 'clarinet' to 'oboe') every few seconds. These responses appear to represent the cortical processes concerned with spectral pattern analysis and the grouping of frequency components to form sound 'objects'. Following a period of 16/s oscillation between two pitches, a more anteriorly distributed negativity is evoked on resumption of a steady pitch. Various lines of evidence suggest that this is probably equivalent to the 'mismatch negativity' (MMN), reflecting a pre-perceptual, memory-based process for detection of change in spectro-temporal sound patterns. This method requires no off-line subtraction of AEPs evoked by the onset of a tone, and the MMN is produced rapidly and robustly with considerably larger amplitude (usually >5 microV) than that to discontinuous pure tones. In the brain-injured patients, the presence of AEPs to two or more complex tone stimuli (in the combined assessment of two authors who were 'blind' to the clinical and behavioural data) was significantly associated with the demonstrable possession of discriminative hearing (the ability to respond differentially to verbal commands

  4. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA, ...]. The filterbank was designed to approximate auditory filter-shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward masking data. The results of the present study demonstrate that a "purely" spectrum-based model approach can successfully describe auditory coloration detection even at high...

  5. Suprathreshold auditory processing deficits in noise: Effects of hearing loss and age.

    Science.gov (United States)

    Kortlang, Steffen; Mauermann, Manfred; Ewert, Stephan D

    2016-01-01

    People with sensorineural hearing loss generally suffer from a reduced ability to understand speech in complex acoustic listening situations, particularly when background noise is present. In addition to the loss of audibility, a mixture of suprathreshold processing deficits is possibly involved, like altered basilar membrane compression and related changes, as well as a reduced ability of temporal coding. A series of 6 monaural psychoacoustic experiments at 0.5, 2, and 6 kHz was conducted with 18 subjects, divided equally into groups of young normal-hearing, older normal-hearing and older hearing-impaired listeners, aiming at disentangling the effects of age and hearing loss on psychoacoustic performance in noise. Random frequency modulation detection thresholds (RFMDTs) with a low-rate modulator in wide-band noise, and discrimination of a phase-jittered Schroeder-phase from a random-phase harmonic tone complex are suggested to characterize the individual ability of temporal processing. The outcome was compared to thresholds of pure tones and narrow-band noise, loudness growth functions, auditory filter bandwidths, and tone-in-noise detection thresholds. At 500 Hz, results suggest a contribution of temporal fine structure (TFS) to pure-tone detection thresholds. Significant correlation with auditory thresholds and filter bandwidths indicated an impact of frequency selectivity on TFS usability in wide-band noise. When controlling for the effect of threshold sensitivity, the listener's age significantly correlated with tone-in-noise detection and RFMDTs in noise at 500 Hz, showing that older listeners were particularly affected by background noise at low carrier frequencies.
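
    The random frequency modulation detection task depends on a tone carrier whose frequency wanders slowly and randomly while embedded in wide-band noise. A sketch of one way to synthesize such a stimulus is given below; the carrier frequency, modulator construction, modulation depth, and noise level are illustrative assumptions, not the stimulus parameters used in the experiments.

        import numpy as np

        def random_fm_tone(fc=500.0, dur=0.5, fs=44100, mod_rate_hz=4.0, depth_hz=10.0,
                           noise_rms=0.05, seed=0):
            """Synthesize a tone with low-rate random frequency modulation in wide-band noise.

            A slow random modulator (piecewise-linear, one new breakpoint per 1/mod_rate_hz
            seconds) deviates the instantaneous frequency around fc by roughly +/- depth_hz.
            """
            rng = np.random.default_rng(seed)
            n = int(dur * fs)
            t = np.arange(n) / fs
            n_pts = max(int(dur * mod_rate_hz) + 1, 2)
            breakpoints = rng.normal(0.0, 1.0, n_pts)                  # slow random contour
            modulator = np.interp(t, np.linspace(0.0, dur, n_pts), breakpoints)
            inst_freq = fc + depth_hz * modulator                      # instantaneous frequency (Hz)
            phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs
            tone = 0.1 * np.sin(phase)
            noise = rng.normal(0.0, noise_rms, n)                      # wide-band masking noise
            return tone + noise

        stimulus = random_fm_tone()
        print(stimulus.shape, round(float(np.std(stimulus)), 3))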

  6. Neurophysiological Mechanisms of Auditory Information Processing in Adolescence: A Study on Sex Differences.

    Science.gov (United States)

    Bakos, Sarolta; Töllner, Thomas; Trinkl, Monika; Landes, Iris; Bartling, Jürgen; Grossheinrich, Nicola; Schulte-Körne, Gerd; Greimel, Ellen

    2016-04-01

    To date, little is known about sex differences in the neurophysiological correlates underlying auditory information processing. In the present study, auditory evoked potentials were evoked in typically developing male (n = 15) and female (n = 14) adolescents (13-18 years) during an auditory oddball task. Girls compared to boys displayed lower N100 and P300 amplitudes to targets. Larger N100 amplitudes in adolescent boys might indicate higher neural sensitivity to changes of incoming auditory information. The P300 findings point toward sex differences in auditory working memory and might suggest that adolescent boys might allocate more attentional resources when processing relevant auditory stimuli than adolescent girls. PMID:27379950

  7. Pediatric central auditory processing disorder showing elevated threshold on pure tone audiogram.

    Science.gov (United States)

    Maeda, Yukihide; Nakagawa, Atsuko; Nagayasu, Rie; Sugaya, Akiko; Omichi, Ryotaro; Kariya, Shin; Fukushima, Kunihiro; Nishizaki, Kazunori

    2016-10-01

    Central auditory processing disorder (CAPD) is a condition in which dysfunction in the central auditory system causes difficulty in listening to conversations, particularly under noisy conditions, despite normal peripheral auditory function. Central auditory testing is generally performed in patients with normal hearing on the pure tone audiogram (PTA). This report shows that diagnosis of CAPD is possible even in the presence of an elevated threshold on the PTA, provided that the normal function of the peripheral auditory pathway was verified by distortion product otoacoustic emission (DPOAE), auditory brainstem response (ABR), and auditory steady state response (ASSR). Three pediatric cases (9- and 10-year-old girls and an 8-year-old boy) of CAPD with elevated thresholds on PTAs are presented. The chief complaint was difficulty in listening to conversations. PTA showed elevated thresholds, but the responses and thresholds for DPOAE, ABR, and ASSR were normal, showing that peripheral auditory function was normal. Significant findings of central auditory testing such as dichotic speech tests, time compression of speech signals, and binaural interaction tests confirmed the diagnosis of CAPD. These threshold shifts in PTA may provide a new concept of a clinical symptom due to central auditory dysfunction in CAPD. PMID:26922127

  8. Auditory processing in patients with Charcot-Marie-Tooth disease type 1A.

    NARCIS (Netherlands)

    Neijenhuis, C.A.M.; Beynon, A.J.; Snik, A.F.M.; Engelen, B.G.M. van; Broek, P. van den

    2003-01-01

    HYPOTHESIS: It is unclear whether Charcot-Marie-Tooth (CMT) disease, type 1A, causes auditory processing disorders. Therefore, auditory processing abilities were investigated in five CMT1A patients with normal hearing. BACKGROUND: Previous studies have failed to separate peripheral from central audi

  9. Encoding of sound localization cues by an identified auditory interneuron: effects of stimulus temporal pattern.

    Science.gov (United States)

    Samson, Annie-Hélène; Pollack, Gerald S

    2002-11-01

    An important cue for sound localization is binaural comparison of stimulus intensity. Two features of neuronal responses, response strength, i.e., spike count and/or rate, and response latency, vary with stimulus intensity, and binaural comparison of either or both might underlie localization. Previous studies at the receptor-neuron level showed that these response features are affected by the stimulus temporal pattern. When sounds are repeated rapidly, as occurs in many natural sounds, response strength decreases and latency increases, resulting in altered coding of localization cues. In this study we analyze binaural cues for sound localization at the level of an identified pair of interneurons (the left and right AN2) in the cricket auditory system, with emphasis on the effects of stimulus temporal pattern on binaural response differences. AN2 spike count decreases with rapidly repeated stimulation and latency increases. Both effects depend on stimulus intensity. Because of the difference in intensity at the two ears, binaural differences in spike count and latency change as stimulation continues. The binaural difference in spike count decreases, whereas the difference in latency increases. The proportional changes in response strength and in latency are greater at the interneuron level than at the receptor level, suggesting that factors in addition to decrement of receptor responses are involved. Intracellular recordings reveal that a slowly building, long-lasting hyperpolarization is established in AN2. At the same time, the level of depolarization reached during the excitatory postsynaptic potential (EPSP) resulting from each sound stimulus decreases. Neither these effects on membrane potential nor the changes in spiking response are accounted for by contralateral inhibition. Based on comparison of our results with earlier behavioral experiments, it is unlikely that crickets use the binaural difference in latency of AN2 responses as the main cue for

  10. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe

    OpenAIRE

    Berns, Gregory S.; Cook, Peter F.; Foxley, Sean; Jbabdi, Saad; Miller, Karla L.; Marino, Lori

    2015-01-01

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this...

  11. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France)]; Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France)]; Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France)]; Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France)]; Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy)]; Dehaene, S. [Coll France, F-75231 Paris (France)]

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistic experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  12. The impact of educational level on performance on auditory processing tests

    Directory of Open Access Journals (Sweden)

    Cristina F.B. Murphy

    2016-03-01

    Full Text Available Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor years of schooling was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demands of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
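
    The reported analysis combines rank correlations between years of schooling and test scores with a regression-style predictor analysis. The sketch below reproduces that kind of analysis on synthetic numbers; the data, effect size, and variable names are illustrative assumptions, not the study's dataset or statistical model.

        import numpy as np
        from scipy import stats

        # Illustrative synthetic data: schooling weakly predicts a speech-in-noise score
        rng = np.random.default_rng(0)
        years_schooling = rng.integers(2, 16, size=177)
        speech_in_noise = 60 + 1.2 * years_schooling + rng.normal(0, 12, size=177)

        rho, p = stats.spearmanr(years_schooling, speech_in_noise)
        slope, intercept, r, p_reg, se = stats.linregress(years_schooling, speech_in_noise)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3g}); "
              f"regression: score = {intercept:.1f} + {slope:.2f} x years (R^2 = {r**2:.2f})")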

  13. Shared and Divergent Auditory and Tactile Processing in Children with Autism and Children with Sensory Processing Dysfunction Relative to Typically Developing Peers.

    Science.gov (United States)

    Demopoulos, Carly; Brandes-Aitken, Annie N; Desai, Shivani S; Hill, Susanna S; Antovich, Ashley D; Harris, Julia; Marco, Elysa J

    2015-07-01

    The aim of this study was to compare sensory processing in typically developing children (TDC), children with Autism Spectrum Disorder (ASD), and those with sensory processing dysfunction (SPD) in the absence of an ASD. Performance-based measures of auditory and tactile processing were compared between male children ages 8-12 years assigned to an ASD (N=20), SPD (N=15), or TDC group (N=19). Both the SPD and ASD groups were impaired relative to the TDC group on a performance-based measure of tactile processing (right-handed graphesthesia). In contrast, only the ASD group showed significant impairment on an auditory processing index assessing dichotic listening, temporal patterning, and auditory discrimination. Furthermore, this impaired auditory processing was associated with parent-rated communication skills for both the ASD group and the combined study sample. No significant group differences were detected on measures of left-handed graphesthesia, tactile sensitivity, or form discrimination; however, more participants in the SPD group demonstrated a higher tactile detection threshold (60%) compared to the TDC (26.7%) and ASD groups (35%). This study provides support for use of performance-based measures in the assessment of children with ASD and SPD and highlights the need to better understand how sensory processing affects the higher order cognitive abilities associated with ASD, such as verbal and non-verbal communication, regardless of diagnostic classification.

  14. Prenatal IV Cocaine: Alterations in Auditory Information Processing

    Directory of Open Access Journals (Sweden)

    Charles F. Mactutus

    2011-06-01

    Full Text Available One clue regarding the basis of cocaine-induced deficits in attentional processing is provided by the clinical findings of changes in the infants’ startle response; observations buttressed by neurophysiological evidence of alterations in brainstem transmission time. Using the IV route of administration and doses that mimic the peak arterial levels of cocaine use in humans, the present study examined the effects of prenatal cocaine on auditory information processing via tests of the acoustic startle response (ASR), habituation, and prepulse inhibition (PPI) in the offspring. Nulliparous Long-Evans female rats, implanted with an IV access port prior to breeding, were administered saline, 0.5, 1.0, or 3.0 mg/kg/injection of cocaine HCl (COC) from gestation day (GD) 8-20 (1x/day GD8-14, 2x/day GD15-20). COC had no significant effects on maternal/litter parameters or growth of the offspring. At 18-20 days of age, one male and one female, randomly selected from each litter, displayed an increased ASR (>30% for males at 1.0 mg/kg and >30% for females at 3.0 mg/kg). When reassessed in adulthood (D90-100), a linear dose-response increase was noted in response amplitude. At both test ages, within-session habituation was retarded by prenatal cocaine treatment. Testing the females in diestrus vs. estrus did not alter the results. Prenatal cocaine altered the PPI response function across interstimulus interval (ISI) and induced significant sex-dependent changes in response latency. Idazoxan, an alpha2-adrenergic receptor antagonist, significantly enhanced the ASR, but less enhancement was noted with increasing doses of prenatal cocaine. Thus, in utero exposure to cocaine, when delivered via a protocol designed to capture prominent features of recreational usage, causes persistent, if not permanent, alterations in auditory information processing, and suggests dysfunction of the central noradrenergic circuitry modulating, if not mediating, these responses.

  15. A Phenomenological Model of the Electrically Stimulated Auditory Nerve Fiber: Temporal and Biphasic Response Properties.

    Science.gov (United States)

    Horne, Colin D F; Sumner, Christian J; Seeber, Bernhard U

    2016-01-01

    We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli, in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate and fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in the threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF response to a large variety of pulse shapes are reproduced correctly by this model.
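
    The core extension described above (a spike-emission delay during which an anodic phase can abolish the pending spike) can be caricatured with a toy stochastic leaky integrate-and-fire simulation. The membrane constants, noise level, abolition criterion, and pulse parameters below are illustrative assumptions, not the fitted parameters of the published model.

        import numpy as np

        def slif_response(pulse_amp, phase_us=40, ipg_us=8, biphasic=True, n_trials=300,
                          tau_us=25.0, threshold=1.0, noise_sd=0.02, delay_us=100.0,
                          abolish_level=-0.3, seed=0):
            """Toy stochastic leaky integrate-and-fire response to a (bi)phasic pulse.

            The membrane integrates a cathodic (excitatory) phase, optionally followed,
            after an inter-phase gap (IPG), by an anodic phase of opposite sign.
            Crossing threshold schedules a spike that is emitted only after a fixed
            latency; if the membrane is pulled down to `abolish_level` before that
            latency elapses (e.g. by the anodic phase), the pending spike is abolished.
            Returns firing probability and mean spike latency (us) over noisy trials.
            """
            rng = np.random.default_rng(seed)
            n_steps = int(2 * phase_us + ipg_us + delay_us + 200)          # 1-us steps
            stim = np.zeros(n_steps)
            stim[:phase_us] = pulse_amp                                    # cathodic phase
            if biphasic:
                stim[phase_us + ipg_us: 2 * phase_us + ipg_us] = -pulse_amp  # anodic phase
            spikes, latencies = 0, []
            for _ in range(n_trials):
                v, pending_since = 0.0, None
                for i in range(n_steps):
                    v += (stim[i] - v) / tau_us + rng.normal(0.0, noise_sd)
                    if pending_since is None:
                        if v >= threshold:
                            pending_since = i                              # spike scheduled
                    else:
                        if i - pending_since >= delay_us:                  # latency elapsed: emit
                            spikes += 1
                            latencies.append(float(i))
                            break
                        if v <= abolish_level:                             # anodic pull cancels it
                            pending_since = None
            mean_lat = float(np.mean(latencies)) if latencies else float("nan")
            return spikes / n_trials, mean_lat

        for label, kwargs in [("monophasic", dict(biphasic=False)),
                              ("biphasic, IPG 8 us", dict(ipg_us=8)),
                              ("biphasic, IPG 100 us", dict(ipg_us=100))]:
            p, lat = slif_response(1.3, **kwargs)
            print(f"{label:22s} P(spike) = {p:.2f}, mean latency = {lat:.0f} us")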

  16. Grasping the sound: Auditory pitch influences size processing in motor planning.

    Science.gov (United States)

    Rinaldi, Luca; Lega, Carlotta; Cattaneo, Zaira; Girelli, Luisa; Bernardi, Nicolò Francesco

    2016-01-01

    Growing evidence shows that individuals consistently match auditory pitch with visual size. For instance, high-pitched sounds are perceptually associated with smaller visual stimuli, whereas low-pitched sounds with larger ones. The present study explores whether this crossmodal correspondence, reported so far for perceptual processing, also modulates motor planning. To address this issue, we carried out a series of kinematic experiments to verify whether actions implying size processing are affected by auditory pitch. Experiment 1 showed that grasping movements toward small/large objects were initiated faster in response to high/low pitches, respectively, thus extending previous findings in the literature to more complex motor behavior. Importantly, auditory pitch influenced the relative scaling of the hand preshaping, with high pitches associated with smaller grip aperture compared with low pitches. Notably, no effect of auditory pitch was found in the case of pointing movements (no grasp implied, Experiment 2), as well as when auditory pitch was irrelevant to the programming of the grip aperture, that is, in the case of grasping an object of uniform size (Experiment 3). Finally, auditory pitch also influenced symbolic manual gestures expressing "small" and "large" concepts (Experiment 4). In sum, our results are novel in revealing the impact of auditory pitch on motor planning when size processing is required, and shed light on the role of auditory information in driving actions. (PsycINFO Database Record) PMID:26280267

  17. Brian hears: online auditory processing using vectorisation over channels

    Directory of Open Access Journals (Sweden)

    Bertrand eFontaine

    2011-07-01

    Full Text Available The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorising computation over frequency channels, which are implemented in Brian Hears, a library for the spiking neural network simulator package Brian. This approach allows us to use high-level programming languages such as Python, as the cost of interpretation becomes negligible. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelised using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
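
    The central idea, vectorizing computation over frequency channels, can be illustrated outside of Brian Hears with plain NumPy: the input is transformed once and a whole matrix of per-channel transfer functions is applied in a single array operation. The gammatone-like magnitude response and channel spacing below are illustrative assumptions, not the Brian Hears implementation.

        import numpy as np

        def gammatone_filterbank(signal, fs, cfs):
            """Apply a bank of 4th-order gammatone-like (magnitude-only, zero-phase) filters,
            vectorised over channels: one FFT of the input, a (channels x frequencies)
            matrix of transfer functions, and a batched inverse FFT, with no Python loop
            over channels.
            """
            n = len(signal)
            freqs = np.fft.rfftfreq(n, 1.0 / fs)                   # (n_freqs,)
            cfs = np.asarray(cfs, dtype=float)[:, None]            # (n_channels, 1)
            erb = 24.7 * (4.37 * cfs / 1000.0 + 1.0)               # equivalent rectangular bandwidth
            h = (1.0 + ((freqs - cfs) / (1.019 * erb)) ** 2) ** -2.0   # all channels at once
            spectrum = np.fft.rfft(signal)                         # (n_freqs,)
            return np.fft.irfft(h * spectrum, n=n)                 # (n_channels, n)

        fs = 16000
        t = np.arange(0, 0.1, 1.0 / fs)
        x = np.sin(2 * np.pi * 1000 * t)                           # 1 kHz tone
        x[0] = 1.0                                                 # plus a click at onset
        cfs = np.geomspace(100, 8000, 32)                          # 32 channels, 100 Hz - 8 kHz
        outputs = gammatone_filterbank(x, fs, cfs)
        print(outputs.shape)                                       # (32, 1600)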

  18. Locating Melody Processing Activity in Auditory Cortex with Magnetoencephalography.

    Science.gov (United States)

    Patterson, Roy D; Andermann, Martin; Uppenkamp, Stefan; Rupp, André

    2016-01-01

    This paper describes a technique for isolating the brain activity associated with melodic pitch processing. The magnetoencephalographic (MEG) response to a four-note, diatonic melody built of French horn notes is contrasted with the response to a control sequence containing four identical, "tonic" notes. The transient response (TR) to the first note of each bar is dominated by energy-onset activity; the melody processing is observed by contrasting the TRs to the remaining melodic and tonic notes of the bar (2-4). They have uniform shape within a tonic or melodic sequence, which makes it possible to fit a 4-dipole model and show that there are two sources in each hemisphere: a melody source in the anterior part of Heschl's gyrus (HG) and an onset source about 10 mm posterior to it, in planum temporale (PT). The N1m to the initial note has a short latency and the same magnitude for the tonic and the melodic sequences. The melody activity is distinguished by the relative sizes of the N1m and P2m components of the TRs to notes 2-4. In the anterior source a given note elicits a much larger N1m-P2m complex with a shorter latency when it is part of a melodic sequence. This study shows how to isolate the N1m, energy-onset response in PT, and produce a clean melody response in the anterior part of auditory cortex (HG).

  19. Postnatal development of synaptic properties of the GABAergic projection from the inferior colliculus to the auditory thalamus

    OpenAIRE

    Venkataraman, Yamini; Bartlett, Edward L.

    2013-01-01

    The development of auditory temporal processing is important for processing complex sounds as well as for acquiring reading and language skills. Neuronal properties and sound processing change dramatically in auditory cortex neurons after the onset of hearing. However, the development of the auditory thalamus or medial geniculate body (MGB) has not been well studied over this critical time window. Since synaptic inhibition has been shown to be crucial for auditory temporal processing, this st...

  20. Developmental differences in visual and auditory processing of complex sentences.

    Science.gov (United States)

    Booth, J R; MacWhinney, B; Harasaki, Y

    2000-01-01

    Children aged 8 through 11 (N = 250) were given a word-by-word sentence task in both the visual and auditory modes. The sentences included an object relative clause, a subject relative clause, or a conjoined verb phrase. Each sentence was followed by a true-false question, testing the subject of either the first or second verb. Participants were also given two memory span measures: digit span and reading span. High digit span children slowed down more at the transition from the main to the relative clause than did the low digit span children. The findings suggest the presence of a U-shaped learning pattern for on-line processing of restrictive relative clauses. Off-line accuracy scores showed different patterns for good comprehenders and poor comprehenders. Poor comprehenders answered the second verb questions at levels that were consistently below chance. Their answers were based on an incorrect local attachment strategy that treated the second noun as the subject of the second verb. For example, they often answered yes to the question "The girl chases the policeman" after the object relative sentence "The boy that the girl sees chases the policeman." Interestingly, low memory span poor comprehenders used the local attachment strategy less consistently than high memory span poor comprehenders, and all poor comprehenders used this strategy less consistently for harder than for easier sentences. PMID:11016560

  1. Effects of aging on peripheral and central auditory processing in rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul

    2016-08-01

    Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. Related to these, this study examines the auditory directional maps in the deep layers of the superior colliculus of the rat. Hence, anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion product of otoacoustic emissions (DPOAEs) to assess cochlear function. Then, auditory brainstem responses (ABRs) were assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats although they were still present between 3.0 and 24.0 kHz. ABR level thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.

  2. Accounting for the phenomenology and varieties of auditory verbal hallucination within a predictive processing framework.

    Science.gov (United States)

    Wilkinson, Sam

    2014-11-01

    Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH, can be accounted for. PMID:25286243

  3. Psychometric profile of children with auditory processing disorder and children with dyslexia.

    OpenAIRE

    DAWES, P; Bishop, DV

    2010-01-01

    OBJECTIVE: The aim was to address the controversy that exists over the extent to which auditory processing disorder (APD) is a separate diagnostic category with a distinctive psychometric profile, rather than a reflection of a more general learning disability. METHODS: Children with an APD diagnosis (N=25) were compared with children with dyslexia (N=19) on a battery of standardised auditory processing, language, literacy and non-verbal intelligence quotient measures as well as parental repor...

  4. Effects of sleep deprivation on central auditory processing

    Directory of Open Access Journals (Sweden)

    Liberalesso Paulo Breno

    2012-07-01

    Full Text Available Abstract Background: Sleep deprivation is extremely common in contemporary society, and is considered to be a frequent cause of disturbances in behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse information. In the present study, thirty healthy adult volunteers (17 females and 13 males, aged 30.75 ± 7.14 years) were subjected to a pure tone audiometry test, a speech recognition threshold test, a speech recognition task, the Staggered Spondaic Word Test (SSWT), and the Random Gap Detection Test (RGDT). Baseline (BSL) performance was compared to performance after 24 hours of being sleep deprived (24hSD) using Student’s t test. Results: Mean RGDT score was elevated in the 24hSD condition (8.0 ± 2.9 ms) relative to the BSL condition for the whole cohort (6.4 ± 2.8 ms; p = 0.0005), for males (p = 0.0066), and for females (p = 0.0208). Sleep deprivation reduced SSWT scores for the whole cohort in both ears (right: BSL, 98.4% ± 1.8% vs. SD, 94.2% ± 6.3%, p = 0.0005; left: BSL, 96.7% ± 3.1% vs. SD, 92.1% ± 6.1%, p ...). Conclusion: Sleep deprivation impairs RGDT and SSWT performance. These findings confirm that sleep deprivation has central effects that may impair performance in other areas of life.
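
    The baseline versus sleep-deprived comparison is a within-subject design analyzed with Student's t test. The sketch below shows that comparison with SciPy on synthetic numbers that merely mimic the direction of the reported RGDT effect; they are not the study data.

        import numpy as np
        from scipy import stats

        # Illustrative paired data: RGDT gap-detection thresholds (ms) for 30 listeners,
        # at baseline (BSL) and after 24 h of sleep deprivation (24hSD). Values are
        # synthetic and only mimic the direction of the reported effect.
        rng = np.random.default_rng(0)
        bsl = rng.normal(6.4, 2.8, 30)
        sd24 = bsl + rng.normal(1.6, 2.0, 30)          # thresholds worsen after deprivation

        t, p = stats.ttest_rel(sd24, bsl)              # paired (repeated-measures) t test
        print(f"t(29) = {t:.2f}, p = {p:.4f}, mean difference = {np.mean(sd24 - bsl):.2f} ms")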

  5. Interference by Process, Not Content, Determines Semantic Auditory Distraction

    Science.gov (United States)

    Marsh, John E.; Hughes, Robert W.; Jones, Dylan M.

    2009-01-01

    Distraction by irrelevant background sound of visually-based cognitive tasks illustrates the vulnerability of attentional selectivity across modalities. Four experiments centred on auditory distraction during tests of memory for visually-presented semantic information. Meaningful irrelevant speech disrupted the free recall of semantic…

  6. Tuned with a Tune: Talker Normalization via General Auditory Processes.

    Science.gov (United States)

    Laing, Erika J C; Liu, Ran; Lotto, Andrew J; Holt, Lori L

    2012-01-01

    Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker's speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker's speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences' LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by non-speech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization. PMID:22737140

  7. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.

  8. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)

    2001-12-01

    To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally, from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  9. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism:An EEG Power and BOLD fMRI Investigation

    Directory of Open Access Journals (Sweden)

    Elizabeth C Hames

    2016-04-01

    Full Text Available Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neural correlates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20 and 28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreases in EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT participants when presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar to that of NTs than unimodal processing is.

  10. Psychophysical Estimates of Frequency Discrimination: More than Just Limitations of Auditory Processing

    OpenAIRE

    Beate Sabisch; Benjamin Weiss; Barry, Johanna G.

    2013-01-01

    Efficient auditory processing is hypothesized to support language and literacy development. However, behavioral tasks used to assess this hypothesis need to be robust to non-auditory specific individual differences. This study compared frequency discrimination abilities in a heterogeneous sample of adults using two different psychoacoustic task designs, referred to here as: 2I_6A_X and 3I_2AFC designs. The role of individual differences in nonverbal IQ (NVIQ), socioeconomic status (SES) and m...

  11. Hierarchical and serial processing in the spatial auditory cortical pathway is degraded by natural aging

    OpenAIRE

    Juarez-Salinas, Dina L.; Engle, James R.; Navarro, Xochi O.; Gregg H Recanzone

    2010-01-01

    The compromised abilities to localize sounds and to understand speech are two hallmark deficits in aged individuals. The auditory cortex is necessary for these processes, yet we know little about how normal aging affects these early cortical fields. In this study, we recorded the spatial tuning of single neurons in primary (area A1) and secondary (area CL) auditory cortical areas in young and aged alert rhesus macaques. We found that the neurons of aged animals had greater spontaneous and dri...

  12. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients.

    Science.gov (United States)

    Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier

    2016-10-01

    Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction; this is important for patient internal supramodal representation. PMID:27622640

  13. Interference by process, not content, determines semantic auditory distraction

    OpenAIRE

    Marsh, J. E.; Hughes, Rob; Jones, D M

    2009-01-01

    Distraction by irrelevant background sound of visually-based cognitive tasks illustrates the vulnerability of attentional selectivity across modalities. Four experiments centred on auditory distraction during tests of memory for visually-presented semantic information. Meaningful irrelevant speech disrupted the free recall of semantic category-exemplars more than meaningless irrelevant sound (Experiment 1). This effect was exacerbated when the irrelevant speech was semantically related to the...

  14. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    Lyon RF

    2007-01-01

    Full Text Available This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review of the gammatone filter and then exposes two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlear transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems, which renders them excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and which can also serve as "design curves" for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a "missing link" between physiological, electrical, and mechanical models for auditory filtering.

  15. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    R. F. Lyon

    2007-12-01

    Full Text Available This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review of the gammatone filter and then exposes two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlear transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems, which renders them excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and which can also serve as “design curves” for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a “missing link” between physiological, electrical, and mechanical models for auditory filtering.
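    As a purely illustrative sketch of the cascade idea mentioned in the abstract (N identical two-pole sections approximating a gammatone-like response), the Python snippet below builds such a cascade with scipy; the centre frequency, quality factor, order, and sampling rate are assumed values, and this is not the DAPGF/OZGF design itself.

        import numpy as np
        from scipy import signal

        def two_pole_section(fc, q, fs):
            # One analog resonator H(s) = w0^2 / (s^2 + (w0/q)s + w0^2), discretized bilinearly.
            w0 = 2 * np.pi * fc
            return signal.bilinear([w0 ** 2], [1.0, w0 / q, w0 ** 2], fs=fs)

        def gammatone_like(fc=1000.0, q=8.0, order=4, fs=16000):
            # Cascade `order` identical two-pole sections and merge them into one coefficient set.
            b_tot, a_tot = np.array([1.0]), np.array([1.0])
            for _ in range(order):
                b, a = two_pole_section(fc, q, fs)
                b_tot, a_tot = np.convolve(b_tot, b), np.convolve(a_tot, a)
            return b_tot, a_tot

        fs = 16000
        b, a = gammatone_like(fc=1000.0, q=8.0, order=4, fs=fs)
        w, h = signal.freqz(b, a, worN=2048, fs=fs)
        print("peak response near", round(float(w[np.argmax(np.abs(h))])), "Hz")

    Because every section is identical, the cascade sharpens around a single centre frequency, which is the property that makes such structures attractive for filterbank-style VLSI realizations.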

  16. A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception.

    Science.gov (United States)

    Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger

    2016-05-01

    A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking as a function of tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions than traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception threshold in modulated noise relative to stationary noise. PMID:27250164
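    The snippet below is an illustrative sketch, not the authors' simulation framework: it computes two of the feature sets named in the abstract (a logarithmically scaled Mel-spectrogram and MFCCs) with librosa for an arbitrary audio file; the file name, sampling rate, and frame parameters are assumptions.

        import librosa

        # Hypothetical input file and analysis parameters.
        y, sr = librosa.load("stimulus.wav", sr=16000)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512, hop_length=160, n_mels=40)
        log_mel = librosa.power_to_db(mel)                 # logarithmically scaled Mel-spectrogram
        mfcc = librosa.feature.mfcc(S=log_mel, n_mfcc=13)  # MFCCs derived from the log-Mel representation
        print(log_mel.shape, mfcc.shape)                   # (n_mels, n_frames), (13, n_frames)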

  17. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.;

    2013-01-01

    human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...... vocalization, compared with during passive listening. One network of regions appears to encode an “error signal” regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across...... presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple...

  18. Auditory pre-attentive processing of Chinese tones

    Institute of Scientific and Technical Information of China (English)

    YANG Li-jun; CAO Ke-li; WEI Chao-gang; LIU Yong-zhi

    2008-01-01

    Background Chinese tones are considered important in discriminating Chinese. However, the reports on the central auditory mechanisms underlying Chinese tones are limited. In this study, mismatch negativity (MMN), one of the event-related potentials (ERPs), was used to investigate pre-attentive processing of Chinese tones, and the differences between the function of the oddball MMN and that of the control MMN are discussed. Methods Ten subjects (six men and four women) with normal hearing participated in the study. A sequence was presented to these subjects through a loudspeaker; the sequence included four blocks, a control block and three oddball blocks. The control block was made up of five components (one pure tone and four Chinese tones) presented with equal probability. The oddball blocks were made up of two components, a standard stimulus (tone 1) and a deviant stimulus (tone 2, tone 3, or tone 4). Electroencephalogram (EEG) data were recorded while the sequence was presented, and MMNs were obtained from the analysis of the EEG data. Results Two kinds of MMNs were obtained, the oddball MMN and the control MMN. The oddball MMN was obtained by subtracting the ERP elicited by the standard stimulus (tone 1) from that elicited by the deviant stimulus (tone 2, tone 3, or tone 4) in the oddball block; the control MMN was obtained by subtracting the ERP elicited by the tone in the control block that was identical to the deviant stimulus in the oddball block from the ERP elicited by the deviant stimulus (tone 2, tone 3, or tone 4) in the oddball block. There were two negative waves in the oddball MMN, one appearing around 150 ms (oddball MMN 1) and the other around 300 ms (oddball MMN 2). Only one negative wave appeared around 300 ms in the control MMN, corresponding to oddball MMN 2. We performed statistical analyses in each paradigm of the latencies and amplitudes of oddball MMN 2 in discriminating the three Chinese tones and reported no significant differences. But the latencies and amplitudes
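    Purely as an illustration of the subtraction logic described above (not the study's analysis pipeline), the sketch below forms an oddball MMN and a control MMN as difference waves from averaged ERPs; the array shapes and random placeholder data are assumptions.

        import numpy as np

        # Hypothetical averaged ERPs, one array per condition (n_channels x n_samples).
        n_channels, n_samples = 32, 600
        erp_deviant_oddball = np.random.randn(n_channels, n_samples)    # e.g. tone 2 as deviant in an oddball block
        erp_standard_oddball = np.random.randn(n_channels, n_samples)   # tone 1 as standard in the same block
        erp_same_tone_control = np.random.randn(n_channels, n_samples)  # the same tone presented in the control block

        oddball_mmn = erp_deviant_oddball - erp_standard_oddball    # deviant minus standard, within the oddball block
        control_mmn = erp_deviant_oddball - erp_same_tone_control   # deviant minus the identical tone from the control block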

  19. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information to recent findings reported using the frequency shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)], in terms of dynamic aspects of F0 control. In a series of experiments, dependencies of system parameters of F0 control on subject, F0, and style (musical expression and speaking) were tested using six participants: three male and three female students specializing in musical education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech, modulated using an M-sequence. The results replicated qualitatively the previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by a Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
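    As a hedged sketch of the stimulus manipulation described above (pseudo-random F0 modulation driven by an M-sequence), the snippet below generates a maximum-length sequence with scipy and maps it onto an F0 contour; the bit length, frame rate, modulation depth, and baseline F0 are illustrative assumptions, not the study's parameters.

        import numpy as np
        from scipy.signal import max_len_seq

        frame_rate = 200                       # F0 control frames per second (assumed)
        seq, _ = max_len_seq(10)               # binary maximum-length sequence of 2**10 - 1 values
        modulation = 2.0 * seq - 1.0           # map {0, 1} -> {-1, +1}
        depth_semitones = 0.5                  # assumed modulation depth
        f0_base = 200.0                        # assumed baseline F0 in Hz
        f0_contour = f0_base * 2.0 ** (depth_semitones * modulation / 12.0)
        print(f"{len(f0_contour)} frames, {len(f0_contour) / frame_rate:.1f} s of modulated F0")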

  20. Monkey׳s short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    Science.gov (United States)

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26707975

  1. Shaping prestimulus neural activity with auditory rhythmic stimulation improves the temporal allocation of attention

    Science.gov (United States)

    Pincham, Hannah L.; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes

    2016-01-01

    Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensorial events to improve the allocation of attention in time. PMID:26986506
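    The following sketch illustrates one conventional way to quantify pre-stimulus alpha-band (8-12 Hz) power from a single EEG channel using Welch's method; the sampling rate, segment length, and placeholder data are assumptions, and this is not the study's analysis pipeline.

        import numpy as np
        from scipy.signal import welch

        fs = 500                                  # assumed sampling rate (Hz)
        prestim = np.random.randn(fs)             # hypothetical 1-s pre-RSVP segment, one channel
        freqs, psd = welch(prestim, fs=fs, nperseg=fs // 2)
        alpha = (freqs >= 8) & (freqs <= 12)
        alpha_power = np.trapz(psd[alpha], freqs[alpha])
        print(f"alpha-band power: {alpha_power:.3f}")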

  2. Phonological working memory and auditory processing speed in children with specific language impairment

    Directory of Open Access Journals (Sweden)

    Fatemeh Haresabadi

    2015-02-01

    Full Text Available Background and Aim: Specific language impairment (SLI), one variety of developmental language disorder, has attracted much interest in recent decades. Much research has been conducted to discover why some children have a specific language impairment, but so far no single cause of this linguistic deficit has been identified. Some researchers believe the language disorder causes defects in phonological working memory and affects auditory processing speed. Therefore, this study reviews the results of research investigating these two factors in children with specific language impairment. Recent Findings: Studies have shown that children with specific language impairment face constraints in phonological working memory capacity. Memory deficit is one possible cause of the linguistic disorder in children with specific language impairment. These children also show impaired information processing speed, particularly in the auditory domain. Conclusion: Much more research is required to adequately explain the relationship of phonological working memory and auditory processing speed with language. However, given the role of phonological working memory and auditory processing speed in language acquisition, a focus should be placed on phonological working memory capacity and auditory processing speed in the assessment and treatment of children with a specific language impairment.

  3. Response to own name in children: ERP study of auditory social information processing.

    Science.gov (United States)

    Key, Alexandra P; Jones, Dorita; Peters, Sarika U

    2016-09-01

    Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M=7.85years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other's names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ between each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses.

  4. Cerebral processing of auditory stimuli in patients with irritable bowel syndrome

    Institute of Scientific and Technical Information of China (English)

    Viola Andresen; Peter Kobelt; Claus Zimmer; Bertram Wiedenmann; Burghard F Klapp; Hubert Monnikes; Alexander Poellinger; Chedwa Tsrouya; Dominik Bach; Albrecht Stroh; Annette Foerschler; Petra Georgiewa; Marco Schmidtmann; Ivo R van der Voort

    2006-01-01

    AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors, the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, an unpleasant 2000-Hz beep, neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while their response patterns, unlike those of controls, did not differentiate between distressing and pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific to visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients.

  5. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Tollkötter Melanie

    2006-08-01

    Full Text Available Abstract Background A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.

  6. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth eHendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. There are many studies on how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task influences temporal-order judgment. Moreover, it is not known at what point during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations influence temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does influence the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  7. Early auditory processing in musicians and dancers during a contemporary dance piece

    Science.gov (United States)

    Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari

    2016-01-01

    The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when music is accompanied with a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG) as has already been done with functional magnetic resonance (fMRI), these two brain imaging methods complementing each other. PMID:27611929
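    As an illustrative sketch (not the authors' ERP pipeline), the snippet below tracks the spectral centroid, a common proxy for timbral brightness, through a continuous recording and flags rapid increases that could serve as event triggers; the file name and threshold are hypothetical.

        import numpy as np
        import librosa

        y, sr = librosa.load("dance_piece.wav", sr=22050)   # hypothetical recording
        hop = 512
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
        times = librosa.frames_to_time(np.arange(len(centroid)), sr=sr, hop_length=hop)

        delta = np.diff(centroid)                            # frame-to-frame change in brightness
        threshold = delta.mean() + 3 * delta.std()           # assumed criterion for a "rapid increase"
        event_times = times[1:][delta > threshold]           # candidate triggers for ERP epoching
        print(f"{len(event_times)} rapid-brightness events found")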

  9. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  10. The Role of the Auditory Brainstem in Processing Linguistically-Relevant Pitch Patterns

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2009-01-01

    Historically, the brainstem has been neglected as a part of the brain involved in language processing. We review recent evidence of language-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem. We argue that there is enhancing…

  11. Basic auditory processing is related to familial risk, not to reading fluency : An ERP study

    NARCIS (Netherlands)

    Hakvoort, Britt; van der Leij, Aryan; Maurits, Natasha; Maassen, Ben; van Zuijen, Titia L.

    2015-01-01

    Less proficient basic auditory processing has been previously connected to dyslexia. However, it is unclear whether a low proficiency level is a correlate of having a familial risk for reading problems, or whether it causes dyslexia. In this study, children's processing of amplitude rise time (ART),

  12. Basic auditory processing is related to familial risk, not to reading fluency: An ERP study

    NARCIS (Netherlands)

    B. Hakvoort; A. van der Leij; N. Maurits; B. Maassen; T.L. van Zuijen

    2014-01-01

    Less proficient basic auditory processing has been previously connected to dyslexia. However, it is unclear whether a low proficiency level is a correlate of having a familial risk for reading problems, or whether it causes dyslexia. In this study, children's processing of amplitude rise time (ART),

  13. Auditory Processing Disorder in Children with Reading Disabilities: Effect of Audiovisual Training

    Science.gov (United States)

    Veuillet, Evelyne; Magnan, Annie; Ecalle, Jean; Thai-Van, Hung; Collet, Lionel

    2007-01-01

    Reading disability is associated with phonological problems which might originate in auditory processing disorders. The aim of the present study was 2-fold: first, the perceptual skills of average-reading children and children with dyslexia were compared in a categorical perception task assessing the processing of a phonemic contrast based on…

  14. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. Also it provided significant contributions to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody.

  15. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. Also it provided significant contributions to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. PMID:27519239

  16. Age-related dissociation of sensory and decision-based auditory motion processing

    Directory of Open Access Journals (Sweden)

    Alexandra Annemarie Ludwig

    2012-03-01

    Full Text Available Studies on the maturation of auditory motion processing in children have yielded inconsistent reports. The present study combines subjective and objective measurements to investigate how the auditory perceptual abilities of children change during development and whether these changes are paralleled by changes in the event-related brain potential (ERP). We employed the mismatch negativity (MMN) to determine maturational changes in the discrimination of interaural time differences (ITDs) that generate lateralized moving auditory percepts. MMNs were elicited in children, teenagers, and adults, using a small and a large ITD at stimulus offset with respect to each subject's discrimination threshold. In adults and teenagers, large deviants elicited prominent MMNs, whereas small deviants at the behavioral threshold elicited only a marginal MMN or none at all. In contrast, pronounced MMNs for both deviant sizes were found in children. Behaviourally, however, most of the children showed higher discrimination thresholds than teens and adults. Although automatic ITD detection is functional, active discrimination is still limited in children. The lack of MMN deviance dependency in children suggests that, unlike in teenagers and adults, neural signatures of automatic auditory motion processing do not mirror discrimination abilities. The study thereby contributes to an advanced understanding of children's central auditory development.

  17. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    Science.gov (United States)

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  18. The role of the auditory brainstem in processing musically relevant pitch.

    Science.gov (United States)

    Bidelman, Gavin M

    2013-01-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294

  19. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  20. Stimulus intensity modulates multisensory temporal processing.

    Science.gov (United States)

    Krueger Fister, Juliane; Stevenson, Ryan A; Nidiffer, Aaron R; Barnett, Zachary P; Wallace, Mark T

    2016-07-29

    One of the more challenging feats that multisensory systems must perform is to determine which sensory signals originate from the same external event, and thus should be integrated or "bound" into a singular perceptual object or event, and which signals should be segregated. Two important stimulus properties impacting this process are the timing and effectiveness of the paired stimuli. It has been well established that the more temporally aligned two stimuli are, the greater the degree to which they influence one another's processing. In addition, the less effective the individual unisensory stimuli are in eliciting a response, the greater the benefit when they are combined. However, the interaction between stimulus timing and stimulus effectiveness in driving multisensory-mediated behaviors has never been explored - which was the purpose of the current study. Participants were presented with either high- or low-intensity audiovisual stimuli in which stimulus onset asynchronies (SOAs) were parametrically varied, and were asked to report on the perceived synchrony/asynchrony of the paired stimuli. Our results revealed an interaction between the temporal relationship (SOA) and intensity of the stimuli. Specifically, individuals were more tolerant of larger temporal offsets (i.e., more likely to call them synchronous) when the paired stimuli were less effective. This interaction was also seen in response time (RT) distributions. Behavioral gains in RTs were seen with synchronous relative to asynchronous presentations, but this effect was more pronounced with high-intensity stimuli. These data suggest that stimulus effectiveness plays an underappreciated role in the perception of the timing of multisensory events, and reinforces the interdependency of the principles of multisensory integration in determining behavior and shaping perception. PMID:26920937
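    As a hedged illustration of the kind of analysis such synchrony-judgment data invite (not the study's actual analysis), the sketch below fits a Gaussian-shaped simultaneity window to the proportion of "synchronous" responses across SOAs for a high- and a low-intensity condition; every number in it is made up.

        import numpy as np
        from scipy.optimize import curve_fit

        soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)   # ms, auditory-leading negative (assumed)
        p_synch_high = np.array([0.10, 0.30, 0.75, 0.95, 0.70, 0.25, 0.08])  # hypothetical high-intensity data
        p_synch_low = np.array([0.20, 0.45, 0.85, 0.95, 0.80, 0.40, 0.15])   # hypothetical low-intensity data (broader window)

        def gaussian(soa, amp, mu, sigma):
            return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

        for label, data in [("high intensity", p_synch_high), ("low intensity", p_synch_low)]:
            (amp, mu, sigma), _ = curve_fit(gaussian, soas, data, p0=[1.0, 0.0, 100.0])
            print(f"{label}: simultaneity window sigma ~ {abs(sigma):.0f} ms")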

  1. Temporal resolution of the Florida manatee (Trichechus manatus latirostris) auditory system.

    Science.gov (United States)

    Mann, David A; Colbert, Debborah E; Gaspard, Joseph C; Casper, Brandon M; Cook, Mandy L H; Reep, Roger L; Bauer, Gordon B

    2005-10-01

    Auditory evoked potential (AEP) measurements of two Florida manatees (Trichechus manatus latirostris) were measured in response to amplitude modulated tones. The AEP measurements showed weak responses to test stimuli from 4 kHz to 40 kHz. The manatee modulation rate transfer function (MRTF) is maximally sensitive to 150 and 600 Hz amplitude modulation (AM) rates. The 600 Hz AM rate is midway between the AM sensitivities of terrestrial mammals (chinchillas, gerbils, and humans) (80-150 Hz) and dolphins (1,000-1,200 Hz). Audiograms estimated from the input-output functions of the EPs greatly underestimate behavioral hearing thresholds measured in two other manatees. This underestimation is probably due to the electrodes being located several centimeters from the brain. PMID:16001184
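    For illustration only, the snippet below synthesizes a sinusoidally amplitude-modulated tone of the kind used to probe a modulation rate transfer function; the carrier frequency, modulation rate, depth, duration, and sampling rate are assumed values, not the stimulus parameters of the study.

        import numpy as np

        fs = 96000                     # assumed sampling rate (Hz)
        duration = 0.5                 # seconds
        carrier_hz = 8000.0            # carrier within the 4-40 kHz range tested
        am_rate_hz = 600.0             # modulation rate near the reported sensitivity peak
        depth = 1.0                    # 100% modulation depth (assumed)

        t = np.arange(int(fs * duration)) / fs
        envelope = 1.0 + depth * np.sin(2 * np.pi * am_rate_hz * t)
        sam_tone = 0.5 * envelope * np.sin(2 * np.pi * carrier_hz * t)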

  2. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (direct after the auditory N1 wave, while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  3. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (direct after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  4. Improving video processing performance using temporal reasoning

    Science.gov (United States)

    Ahmed, Mohamed; Karmouch, Ahmed

    1999-10-01

    In this paper, we present a system, called MediABS, for extracting key frames in a video segment. First we will describe the overall architecture of the system and we will show how our system can handle multiple video formats with a single video-processing module. Then we will present a new algorithm, based on color histograms. The algorithm exploits the temporal characteristic of the visual information and provides techniques for avoiding false cuts and eliminating the possibility of missing true cuts. A discussion, along with some results, will be provided to show the merits of our algorithm compared to existing related algorithms. Finally we will discuss the performance (in terms of processing time and accuracy) obtained by our system in extracting the key frames from a video segment. This work is part of the Mobile Agents Alliance project involving University of Ottawa, National Research Council (NRC) and Mitel Corporation.
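    The sketch below is a minimal, hypothetical illustration of histogram-based cut detection of the kind the abstract describes, not the MediABS system itself; it compares colour histograms of consecutive frames with OpenCV and flags frames whose histogram distance exceeds an assumed threshold (MediABS adds temporal reasoning on top of this to suppress false cuts, which the sketch omits).

        import cv2

        def colour_histogram(frame):
            # 8x8x8-bin BGR histogram, normalized and flattened into a feature vector.
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256, 0, 256, 0, 256])
            return cv2.normalize(hist, hist).flatten()

        cap = cv2.VideoCapture("segment.mp4")        # hypothetical video segment
        prev_hist, cuts, frame_idx = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hist = colour_histogram(frame)
            if prev_hist is not None:
                distance = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
                if distance > 0.4:                   # assumed threshold for a candidate cut
                    cuts.append(frame_idx)
            prev_hist, frame_idx = hist, frame_idx + 1
        cap.release()
        print("candidate cuts at frames:", cuts)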

  5. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Directory of Open Access Journals (Sweden)

    Jiagui Qu

    Full Text Available Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has rarely been investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate selective attention to the auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in any of the recorded brain regions. Furthermore, no differences were found between the latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggest that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  6. Entropical Aspects in Auditory Processes and Psychoacoustical Law of Weber-Fechner

    Science.gov (United States)

    Cosma, I.; Popescu, D. I.

    In the hearing sense, mechanoreceptors fire action potentials when their membranes are physically stretched. Based on statistical physics, we analyzed the entropic aspects of auditory processing. We develop a model that connects the logarithm of the relative intensity of sound (loudness) to the level of energy disorder within the cellular sensory system. The increase of entropy and disorder in the system is connected to the free energy available to drive the production of action potentials in the inner hair cells of the vestibulocochlear auditory organ.
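    For reference, the logarithmic intensity-loudness relation the abstract alludes to is usually written in its textbook Weber-Fechner form (a standard formula, not one quoted from the paper), where S is the perceived magnitude, I the stimulus intensity, I_0 the threshold intensity, and k a sense-specific constant:

\[
S \;=\; k \,\ln\!\left(\frac{I}{I_0}\right)
\]

    Equal ratios of intensity thus map onto equal increments of sensation, which is why sound level is conveniently expressed on a logarithmic (decibel) scale.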

  7. Individual differences in the discrimination of novel speech sounds: effects of sex, temporal processing, musical and cognitive abilities.

    Science.gov (United States)

    Kempe, Vera; Thoresen, John C; Kirk, Neil W; Schaeffler, Felix; Brooks, Patricia J

    2012-01-01

    This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N=120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds. PMID:23139806

  8. Individual differences in the discrimination of novel speech sounds: effects of sex, temporal processing, musical and cognitive abilities.

    Directory of Open Access Journals (Sweden)

    Vera Kempe

    Full Text Available This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N=120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.

  9. CONTRALATERAL SUPPRESSION OF DISTORTION PRODUCT OTOACOUSTIC EMISSION IN CHILDREN WITH AUDITORY PROCESSING DISORDERS

    Institute of Scientific and Technical Information of China (English)

    Jessica Oppee; SUN Wei; Nancy Stecker

    2014-01-01

    Previous research has demonstrated that the amplitude of evoked emissions decreases in human subjects when the contralateral ear is stimulated by noise. The medial olivocochlear bundle (MOCB) is believed to control this phenomenon. Recent research has examined this effect in individuals with auditory processing disorders (APD), specifically with difficulty understanding speech in noise. Results showed transient evoked otoacoustic emissions (TEOAEs) were not affected by contralateral stimulation in these subjects. Much clinical research has measured the function of the MOCB through TEOAEs. This study will use an alternative technique, distortion product otoacoustic emissions (DPOAEs), to examine this phenomenon and evaluate the function of the MOCB. DPOAEs of individuals in a control group with normal hearing and no significant auditory processing difficulties were compared to the DPOAEs of children with significant auditory processing difficulties. Results showed that the suppression effect was observed in the control group at 2 kHz with 3 kHz narrowband noise. For the auditory processing disorders group, no significant suppression was observed. Overall, DPOAEs showed suppression with contralateral noise, while the APD group levels increased overall. These results provide further evidence that the MOCB may have reduced function in children with APD.

  10. Short-Term Memory and Auditory Processing Disorders: Concurrent Validity and Clinical Diagnostic Markers

    Science.gov (United States)

    Maerlender, Arthur

    2010-01-01

    Auditory processing disorders (APDs) are of interest to educators and clinicians, as they impact school functioning. Little work has been completed to demonstrate how children with APDs perform on clinical tests. In a series of studies, standard clinical (psychometric) tests from the Wechsler Intelligence Scale for Children, Fourth Edition…

  11. Age, dyslexia subtype and comorbidity modulate rapid auditory processing in developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Maria Luisa eLorusso

    2014-05-01

    Full Text Available The nature of Rapid Auditory Processing (RAP) deficits in dyslexia remains debated, together with the specificity of the problem to certain types of stimuli and/or restricted subgroups of individuals. Following the hypothesis that the heterogeneity of the dyslexic population may have led to contrasting results, the aim of the study was to define the effect of age, dyslexia subtype and comorbidity on the discrimination and reproduction of nonverbal tone sequences. Participants were 46 children aged 8-14 (26 with dyslexia), subdivided according to age, presence of a previous language delay, and type of dyslexia. Experimental tasks were a Temporal Order Judgment (TOJ) task (manipulating tone length, ISI and sequence length) and a Pattern Discrimination Task. Dyslexic children showed general RAP deficits. Tone length and ISI influenced dyslexic and control children's performance in a similar way, but dyslexic children were more affected by an increase from 2 to 5 sounds. As to age, older dyslexic children's difficulty in reproducing sequences of 4 and 5 tones was similar to that of normally reading younger (but not older) children. In the analysis of subgroup profiles, the crucial variable appears to be the advantage, or lack thereof, in processing long vs short sounds. Dyslexic children with a previous language delay obtained the lowest scores in RAP measures, but they performed worse with shorter stimuli, similar to control children, while dyslexic-only children showed no advantage for longer stimuli. As to dyslexia subtype, only surface dyslexics improved their performance with longer stimuli, while phonological dyslexics did not. Differential scores for short vs long tones and for long vs short ISIs predict nonword and word reading, respectively, and the former correlate with phonemic awareness. In conclusion, the relationship between nonverbal RAP, phonemic skills and reading abilities appears to be characterized by complex interactions with

  12. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W.; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development 1-4 as well as mechanisms for temporal coding in the central nervous system 5-7.

  13. Event-related desynchronization of frontal-midline theta rhythm during preconscious auditory oddball processing.

    Science.gov (United States)

    Kawamata, Masaru; Kirino, Eiji; Inoue, Reiichi; Arai, Heii

    2007-10-01

    The goal of this study was to explore the frontal-midline theta rhythm (Fm theta) generation mechanism employing event-related desynchronization/synchronization (ERD/ERS) analysis in relation to task-irrelevant external stimuli. A dual paradigm was employed: a videogame and the simultaneous presentation of passive auditory oddball stimuli. We analyzed the data concerning ERD/ERS using both Fast Fourier Transformation (FFT) and wavelet transform (WT). In the FFT data, during the periods with appearance of Fm theta, apparent ERD of the theta band was observed at Fz and Cz. ERD when Fm theta was present was much more prominent than when Fm theta was absent. In the WT data, as in the FFT data, ERD was seen again, but in this case the ERD was preceded by ERS during both the periods with and without Fm theta. Furthermore, the WT analysis indicated that ERD was followed by ERS during the periods without Fm theta. However, during Fm theta, no apparent ERS following ERD was seen. In our study, Fm theta was desynchronized by the auditory stimuli that were independent of the video game task used to evoke the Fm theta. The ERD of Fm theta might be reflecting the mechanism of "positive suppression" to process external auditory stimuli automatically and preventing attentional resources from being unnecessarily allocated to those stimuli. Another possibility is that Fm theta induced by our dual paradigm may reflect information processing modeled by multi-item working memory requirements for playing the videogame and the simultaneous auditory processing using a memory trace. ERS in the WT data without Fm theta might indicate further processing of the auditory information free from "positive suppression" control reflected by Fm theta. PMID:17993201
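    As a point of reference for the ERD/ERS analysis described above, the conventional band-power definition (Pfurtscheller-style) can be sketched as follows; the function names, band limits and epoch layout are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of band-power ERD/ERS computation from epoched EEG data,
# assuming epochs are 1-D NumPy arrays sampled at fs Hz. Illustrative only.
import numpy as np

def band_power(epoch, fs, f_lo=4.0, f_hi=8.0):
    """Mean FFT power of one epoch in the theta band [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

def erd_percent(activation_epochs, reference_epochs, fs):
    """ERD/ERS in percent relative to the reference interval:
    negative values = desynchronization (power decrease),
    positive values = synchronization (power increase)."""
    a = np.mean([band_power(e, fs) for e in activation_epochs])
    r = np.mean([band_power(e, fs) for e in reference_epochs])
    return (a - r) / r * 100.0
```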

  14. Aphasia and Auditory Processing after Stroke through an International Classification of Functioning, Disability and Health Lens.

    Science.gov (United States)

    Purdy, Suzanne C; Wanigasekara, Iruni; Cañete, Oscar M; Moore, Celia; McCann, Clare M

    2016-08-01

    Aphasia is an acquired language impairment affecting speaking, listening, reading, and writing. Aphasia occurs in about a third of patients who have ischemic stroke and significantly affects functional recovery and return to work. Stroke is more common in older individuals but also occurs in young adults and children. Because people experiencing a stroke are typically aged between 65 and 84 years, hearing loss is common and can potentially interfere with rehabilitation. There is some evidence for increased risk and greater severity of sensorineural hearing loss in the stroke population and hence it has been recommended that all people surviving a stroke should have a hearing test. Auditory processing difficulties have also been reported poststroke. The International Classification of Functioning, Disability and Health (ICF) can be used as a basis for describing the effect of aphasia, hearing loss, and auditory processing difficulties on activities and participation. Effects include reduced participation in activities outside the home such as work and recreation and difficulty engaging in social interaction and communicating needs. A case example of a young man (M) in his 30s who experienced a left-hemisphere ischemic stroke is presented. M has normal hearing sensitivity but has aphasia and auditory processing difficulties based on behavioral and cortical evoked potential measures. His principal goal is to return to work. Although auditory processing difficulties (and hearing loss) are acknowledged in the literature, clinical protocols typically do not specify routine assessment. The literature and the case example presented here suggest a need for further research in this area and a possible change in practice toward more routine assessment of auditory function post-stroke. PMID:27489401

  15. Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing.

    Science.gov (United States)

    Mai, Guangting; Minett, James W; Wang, William S-Y

    2016-06-01

    A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ, patterns during phonological analysis. We also found significant β-related effects, suggesting tracking of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing grammatical word categories. Based on these findings, we confirm that scalp EEG signatures relevant to δ, θ, β, and γ oscillations can index phonological and semantic/syntactic organizations separately in auditory sentence processing, compatible with the view that phonological and higher-level linguistic processing engage distinct neural networks. PMID:26931813

  16. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy.

    Science.gov (United States)

    Rocha, Caroline Nunes; Miziara, Carmen Silvia Molleis Galego; Manreza, Maria Luiza Giraldes de; Schochat, Eliane

    2010-02-01

    The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) in subjects with left mesial temporal sclerosis (LMTS) in relation to a behavioral test, the Dichotic Digits Test (DDT), and an event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and P300 amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls. This difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. Regarding P300 latency and amplitude, as a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions in relation to controls, with the difference being statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of P300 between affected and unaffected hemispheres.

  17. Inducing attention not to blink: auditory entrainment improves conscious visual processing.

    Science.gov (United States)

    Ronconi, Luca; Pincham, Hannah L; Szűcs, Dénes; Facoetti, Andrea

    2016-09-01

    Our ability to allocate attention at different moments in time can sometimes fail to select stimuli occurring in close succession, preventing visual information from reaching awareness. This so-called attentional blink (AB) occurs when the second of two targets (T2) is presented closely after the first (T1) in a rapid serial visual presentation (RSVP). We hypothesized that entrainment to a rhythmic stream of stimuli presented before the visual targets appear would reduce the AB. Experiment 1 tested the effect of auditory entrainment by presenting sounds with a regular or irregular interstimulus interval prior to an RSVP in which T1 and T2 were separated by three possible lags (1, 3 and 8). Experiment 2 examined visual entrainment by presenting visual stimuli in place of auditory stimuli. Results revealed that, irrespective of sensory modality, arrhythmic stimuli preceding the RSVP triggered an alerting effect that improved T2 identification at lag 1 but impaired recovery from the AB at lag 8. Importantly, only auditory rhythmic entrainment was effective in reducing the AB at lag 3. Our findings demonstrate that manipulating the pre-stimulus condition can reduce deficits in temporal attention that characterize the human cognitive architecture, suggesting innovative training approaches for acquired and neurodevelopmental disorders. PMID:26215434

  18. Peripheral auditory processing changes seasonally in Gambel's white-crowned sparrow.

    Science.gov (United States)

    Caras, Melissa L; Brenowitz, Eliot; Rubel, Edwin W

    2010-08-01

    Song in oscine birds is a learned behavior that plays important roles in breeding. Pronounced seasonal differences in song behavior and in the morphology and physiology of the neural circuit underlying song production are well documented in many songbird species. Androgenic and estrogenic hormones largely mediate these seasonal changes. Although much work has focused on the hormonal mechanisms underlying seasonal plasticity in songbird vocal production, relatively less work has investigated seasonal and hormonal effects on songbird auditory processing, particularly at a peripheral level. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a highly seasonal breeder. Photoperiod and hormone levels were manipulated in the laboratory to simulate natural breeding and non-breeding conditions. Peripheral auditory function was assessed by measuring the auditory brainstem response (ABR) and distortion product otoacoustic emissions (DPOAEs) of males and females in both conditions. Birds exposed to breeding-like conditions demonstrated elevated thresholds and prolonged peak latencies when compared with birds housed under non-breeding-like conditions. There were no changes in DPOAEs, however, which indicates that the seasonal differences in ABRs do not arise from changes in hair cell function. These results suggest that seasons and hormones impact auditory processing as well as vocal production in wild songbirds.

  19. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp eHoman

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  20. Quantifying Auditory Temporal Stability in a Large Database of Recorded Music

    OpenAIRE

    Ellis, Robert J.; Zhiyan Duan; Ye Wang

    2014-01-01

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). A...

  1. Comparison of LFP-Based and Spike-Based Spectro-Temporal Receptive Fields and Cross-Correlation in Cat Primary Auditory Cortex

    OpenAIRE

    Eggermont, Jos J.; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that ...

  2. Cognitive components of regularity processing in the auditory domain

    OpenAIRE

    Stefan Koelsch; Daniela Sammler

    2008-01-01

    BACKGROUND: Music-syntactic irregularities often co-occur with the processing of physical irregularities. In this study we constructed chord-sequences such that perceived differences in the cognitive processing between regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition or pitch commonality (the major component of 'sensory dissonance'). METHODOLOGY/PRINCIPAL FINDINGS: Two groups of subjects (musicians and nonmusicians) were investi...

  3. Interaural cross correlation of event-related potentials and diffusion tensor imaging in the evaluation of auditory processing disorder: a case study.

    Science.gov (United States)

    Jerger, James; Martin, Jeffrey; McColl, Roderick

    2004-01-01

    In a previous publication (Jerger et al, 2002), we presented event-related potential (ERP) data on a pair of 10-year-old twin girls (Twins C and E), one of whom (Twin E) showed strong evidence of auditory processing disorder. For the present paper, we analyzed cross-correlation functions of ERP waveforms generated in response to the presentation of target stimuli to either the right or left ears in a dichotic paradigm. There were four conditions; three involved the processing of real words for either phonemic, semantic, or spectral targets; one involved the processing of a nonword acoustic signal. Marked differences in the cross-correlation functions were observed. In the case of Twin C, cross-correlation functions were uniformly normal across both hemispheres. The functions for Twin E, however, suggest poorly correlated neural activity over the left parietal region during the three word processing conditions, and over the right parietal area in the nonword acoustic condition. Differences between the twins' brains were evaluated using diffusion tensor magnetic resonance imaging (DTI). For Twin E, results showed reduced anisotropy over the length of the midline corpus callosum and adjacent lateral structures, implying reduced myelin integrity. Taken together, these findings suggest that failure to achieve appropriate temporally correlated bihemispheric brain activity in response to auditory stimulation, perhaps as a result of faulty interhemispheric communication via corpus callosum, may be a factor in at least some children with auditory processing disorder. PMID:15030103
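    A minimal sketch of the normalized cross-correlation computation underlying such analyses is given below, assuming two equal-length ERP waveforms stored as NumPy arrays; it is an illustration, not the authors' analysis code.

```python
# Normalized cross-correlation of two ERP waveforms (e.g., responses recorded
# over homologous left and right scalp regions). A peak value near 1 at a lag
# near 0 indicates strongly correlated, temporally aligned neural activity.
import numpy as np

def normalized_xcorr(x, y):
    """Return (lags, normalized cross-correlation) for equal-length arrays."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    lags = np.arange(-(len(y) - 1), len(x))
    return lags, np.correlate(x, y, mode="full")
```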

  4. Visual or Auditory Processing Style and Strategy Effectiveness.

    Science.gov (United States)

    Weed, Keri; Ryan, Ellen Bouchard

    In a study that investigated differences in the processing styles of beginning readers, a Pictograph Sentence Memory Test (PSMT) was administered to first and second grade students to determine their processing style as well as to assess instructional effects. Based on their responses to the PSMT, the children were classified as either visual or…

  5. The influence of visual information on auditory processing in individuals with congenital amusia: An ERP study.

    Science.gov (United States)

    Lu, Xuejing; Ho, Hao T; Sun, Yanan; Johnson, Blake W; Thompson, William F

    2016-07-15

    While most normal hearing individuals can readily use prosodic information in spoken language to interpret the moods and feelings of conversational partners, people with congenital amusia report that they often rely more on facial expressions and gestures, a strategy that may compensate for deficits in auditory processing. In this investigation, we used EEG to examine the extent to which individuals with congenital amusia draw upon visual information when making auditory or audio-visual judgments. Event-related potentials (ERP) were elicited by a change in pitch (up or down) between two sequential tones paired with a change in spatial position (up or down) between two visually presented dots. The change in dot position was either congruent or incongruent with the change in pitch. Participants were asked to judge (1) the direction of pitch change while ignoring the visual information (AV implicit task), and (2) whether the auditory and visual changes were congruent (AV explicit task). In the AV implicit task, amusic participants performed significantly worse in the incongruent condition than control participants. ERPs showed an enhanced N2-P3 response to incongruent AV pairings for control participants, but not for amusic participants. However when participants were explicitly directed to detect AV congruency, both groups exhibited enhanced N2-P3 responses to incongruent AV pairings. These findings indicate that amusics are capable of extracting information from both modalities in an AV task, but are biased to rely on visual information when it is available, presumably because they have learned that auditory information is unreliable. We conclude that amusic individuals implicitly draw upon visual information when judging auditory information, even though they have the capacity to explicitly recognize conflicts between these two sensory channels. PMID:27132045

  6. Musical intervention enhances infants' neural processing of temporal structure in music and speech.

    Science.gov (United States)

    Zhao, T Christina; Kuhl, Patricia K

    2016-05-10

    Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants' neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants' neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants' neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants' ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing. PMID:27114512

  7. Musical intervention enhances infants’ neural processing of temporal structure in music and speech

    Science.gov (United States)

    Zhao, T. Christina; Kuhl, Patricia K.

    2016-01-01

    Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants’ neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants’ neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants’ neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants’ ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing. PMID:27114512

  8. Rhythmic processing in children with developmental dyslexia: auditory and motor rhythms link to reading and spelling.

    Science.gov (United States)

    Thomson, Jennifer M; Goswami, Usha

    2008-01-01

    Potential links between the language and motor systems in the brain have long attracted the interest of developmental psychologists. In this paper, we investigate a link often observed (e.g., [Wolff, P.H., 2002. Timing precision and rhythm in developmental dyslexia. Reading and Writing, 15 (1), 179-206.]) between motor tapping and written language skills. We measure rhythmic finger tapping (paced by a metronome beat versus unpaced) and motor dexterity, phonological and auditory processing in 10-year-old children, some of whom had a diagnosis of developmental dyslexia. We report links between paced motor tapping, auditory rhythmic processing and written language development. Motor dexterity does not explain these relationships. In regression analyses, paced finger tapping explained unique variance in reading and spelling. An interpretation based on the importance of rhythmic timing for both motor skills and language development is proposed. PMID:18448317

  9. Processing of spatial sounds in the impaired auditory system

    DEFF Research Database (Denmark)

    Arweiler, Iris

    ... information is not crucial. ... with an intelligibility-weighted “efficiency factor”, which revealed that the spectral characteristics of the ER’s caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not required for the integration of the ER’s with the DS. Different masker types were found to have an impact on the binaural processing of the overall speech signal but not on the processing of ER’s. Second, the influence of interaural level differences (ILD’s) on speech intelligibility was investigated with a hearing aid... The results from an additional experiment demonstrated that the ER benefit was maintained with independent as well as with linked hearing aid compression. Overall, this work contributes to the understanding of ER processing in listeners with normal and impaired hearing and may have...

  10. He hears, she hears: are there sex differences in auditory processing?

    Science.gov (United States)

    Yoder, Kathleen M; Phan, Mimi L; Lu, Kai; Vicario, David S

    2015-03-01

    Songbirds learn individually unique songs through vocal imitation and use them in courtship and territorial displays. Previous work has identified a forebrain auditory area, the caudomedial nidopallium (NCM), that appears specialized for discriminating and remembering conspecific vocalizations. In zebra finches (ZFs), only males produce learned vocalizations, but both sexes process these and other signals. This study assessed sex differences in auditory processing by recording extracellular multiunit activity at multiple sites within NCM. Juvenile female ZFs (n = 46) were reared in individual isolation and artificially tutored with song. In adulthood, songs were played back to assess auditory responses, stimulus-specific adaptation, neural bias for conspecific song, and memory for the tutor's song, as well as recently heard songs. In a subset of females (n = 36), estradiol (E2) levels were manipulated to test the contribution of E2, known to be synthesized in the brain, to auditory responses. Untreated females (n = 10) showed significant differences in response magnitude and stimulus-specific adaptation compared to males reared in the same paradigm (n = 9). In hormone-manipulated females, E2 augmentation facilitated the memory for recently heard songs in adulthood, but neither E2 augmentation (n = 15) nor E2 synthesis blockade (n = 9) affected tutor song memory or the neural bias for conspecific song. The results demonstrate subtle sex differences in processing communication signals, and show that E2 levels in female songbirds can affect the memory for songs of potential suitors, thus contributing to the process of mate selection. The results also have potential relevance to clinical interventions that manipulate E2 in human patients. PMID:25220950

  11. Right cerebral hemisphere and central auditory processing in children with developmental dyslexia

    OpenAIRE

    Paulina C. Murphy-Ruiz; Yolanda R. Penaloza-Lopez; Felipe Garcia-Pedroza; Adrian Poblano

    2013-01-01

    Objective: We hypothesized that if right-hemisphere auditory processing abilities are altered in children with developmental dyslexia (DD), this dysfunction can be detected using specific tests. Method: We performed an analytical, comparative, cross-sectional study. We studied 20 right-handed children with DD and 20 healthy right-handed control subjects (CS). Children in both groups were age-, gender-, and school-grade matched. Focusing on the right hemisphere's contribution, we utilized tests to...

  12. Coevolution in communication senders and receivers: vocal behavior and auditory processing in multiple songbird species

    OpenAIRE

    Woolley, Sarah M. N.; Moore, Jordan M.

    2011-01-01

    Communication is a strong selective pressure on brain evolution because the exchange of information between individuals is crucial for fitness-related behaviors, such as mating. Given the importance of communication, the brains of signal senders and receivers are likely to be functionally coordinated. We study vocal behavior and auditory processing in multiple species of estrildid finches with the goal of understanding how species identity and early experience interact to shape the neural sys...

  13. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    OpenAIRE

    Hendrik eSantosa; Melissa Jiyoun Hong; Keum-Shik eHong

    2014-01-01

    The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distingui...

  14. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    OpenAIRE

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

    The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish s...

  15. Cognitive components of regularity processing in the auditory domain.

    Directory of Open Access Journals (Sweden)

    Stefan Koelsch

    Full Text Available BACKGROUND: Music-syntactic irregularities often co-occur with the processing of physical irregularities. In this study we constructed chord-sequences such that perceived differences in the cognitive processing between regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition or pitch commonality (the major component of 'sensory dissonance'). METHODOLOGY/PRINCIPAL FINDINGS: Two groups of subjects (musicians and nonmusicians) were investigated with electroencephalography (EEG). Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs). The ERAN had a latency of around 180 ms after the onset of the music-syntactically irregular chords, and had maximum amplitude values over right anterior electrode sites. CONCLUSIONS/SIGNIFICANCE: Because irregular chords were hardly detectable based on acoustical factors (such as pitch repetition and sensory dissonance), this ERAN effect reflects for the most part cognitive (not sensory) components of regularity-based, music-syntactic processing. Our study represents a methodological advance compared to previous ERP-studies investigating the neural processing of music-syntactically irregular chords.

  16. Video Game Players Show More Precise Multisensory Temporal Processing Abilities

    OpenAIRE

    Donohue, Sarah E.; Marty G Woldorff; Stephen R Mitroff

    2010-01-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. The current study examined whether video game players’ benefits generalize beyond vision to multisensory processing by presenting video game players and non-video game players auditory and visual stim...

  17. Towards Low-Power On-chip Auditory Processing

    Directory of Open Access Journals (Sweden)

    Paul Hasler

    2005-05-01

    Full Text Available Machine perception is a difficult problem both from a practical, implementation point of view and from a theoretical, algorithmic point of view. Machine perception systems based on biological perception systems show great promise in many areas, but they often have processing requirements and/or data flow requirements that are difficult to implement, especially in small or low-power systems. We propose a system design approach that makes it possible to implement complex functionality using cooperative analog-digital signal processing, lowering power requirements dramatically compared with digital-only systems, as well as providing an architecture that facilitates the development of biologically motivated perception systems. We show the architecture and the application development approach. We also present several reference systems for speech recognition, noise suppression, and audio classification.

  18. Auditory Processing in Noise: A Preschool Biomarker for Literacy

    OpenAIRE

    White-Schwoch, Travis; Woodruff Carr, Kali; Thompson, Elaine C.; Anderson, Samira; Nicol, Trent; Bradlow, Ann R.; Zecker, Steven G.; Kraus, Nina

    2015-01-01

    Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child’s future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to t...

  19. Multiple benefits of personal FM system use by children with auditory processing disorder (APD).

    Science.gov (United States)

    Johnston, Kristin N; John, Andrew B; Kreisman, Nicole V; Hall, James W; Crandell, Carl C

    2009-01-01

    Children with auditory processing disorders (APD) were fitted with Phonak EduLink FM devices for home and classroom use. Baseline measures of the children with APD, prior to FM use, documented significantly lower speech-perception scores, evidence of decreased academic performance, and psychosocial problems in comparison to an age- and gender-matched control group. Repeated measures during the school year demonstrated speech-perception improvement in noisy classroom environments as well as significant academic and psychosocial benefits. Compared with the control group, the children with APD showed greater speech-perception advantage with FM technology. Notably, after prolonged FM use, even unaided (no FM device) speech-perception performance was improved in the children with APD, suggesting the possibility of fundamentally enhanced auditory system function. PMID:19925345

  20. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams.

    Science.gov (United States)

    Bezgin, Gleb; Rybacki, Konrad; van Opstal, A John; Bakker, Rembrandt; Shen, Kelly; Vakorin, Vasily A; McIntosh, Anthony R; Kötter, Rolf

    2014-08-01

    Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced from studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way at the cortical level. Recent studies offer rich qualitative evidence for the dual stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more extensively explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception. PMID:24980416

  1. Psychophysical Estimates of Frequency Discrimination: More than Just Limitations of Auditory Processing

    Directory of Open Access Journals (Sweden)

    Beate Sabisch

    2013-07-01

    Full Text Available Efficient auditory processing is hypothesized to support language and literacy development. However, behavioral tasks used to assess this hypothesis need to be robust to non-auditory-specific individual differences. This study compared frequency discrimination abilities in a heterogeneous sample of adults using two different psychoacoustic task designs, referred to here as the 2I_6A_X and 3I_2AFC designs. The roles of individual differences in nonverbal IQ (NVIQ), socioeconomic status (SES) and musical experience in predicting frequency discrimination thresholds on each task were assessed using multiple regression analyses. The 2I_6A_X task was more cognitively demanding and hence more susceptible to differences specifically in SES and musical training. Performance on this task did not, however, relate to nonword repetition ability (a measure of language learning capacity). The 3I_2AFC task, by contrast, was only susceptible to musical training. Moreover, thresholds measured using it predicted some variance in nonword repetition performance. This design thus seems suitable for use in studies addressing questions regarding the role of auditory processing in supporting language and literacy development.
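    Threshold estimation in such psychoacoustic designs is typically run as an adaptive staircase. The sketch below shows a generic 2-down/1-up rule (converging near 70.7% correct; Levitt, 1971); the function names and step sizes are illustrative assumptions and do not reproduce the 2I_6A_X or 3I_2AFC procedures used in the study.

```python
# Generic 2-down/1-up adaptive staircase for a frequency-discrimination threshold.
# `respond(delta_hz)` is any callable that returns True when the listener answers
# correctly at a frequency difference of delta_hz (e.g., a trial in an experiment).
def run_staircase(respond, start=50.0, step=4.0, min_step=0.5, reversals_needed=8):
    """Return a threshold estimate: the mean delta at the last six reversals."""
    delta, correct_in_row, direction = start, 0, -1
    reversal_values = []
    while len(reversal_values) < reversals_needed:
        if respond(delta):
            correct_in_row += 1
            if correct_in_row == 2:              # two correct in a row -> harder
                correct_in_row = 0
                if direction == +1:              # direction change = reversal
                    reversal_values.append(delta)
                    step = max(step / 2.0, min_step)
                direction = -1
                delta = max(delta - step, min_step)
        else:                                    # one error -> easier
            correct_in_row = 0
            if direction == -1:
                reversal_values.append(delta)
                step = max(step / 2.0, min_step)
            direction = +1
            delta += step
    last = reversal_values[-6:]
    return sum(last) / len(last)
```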

  2. Differential Processing of Consonance and Dissonance within the Human Superior Temporal Gyrus

    Directory of Open Access Journals (Sweden)

    Francine eFoo

    2016-04-01

    Full Text Available The auditory cortex is well known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals directly from the lateral surface of either the left or right temporal lobe of 8 patients undergoing neurosurgical treatment as they passively listened to highly consonant and highly dissonant musical chords. We assessed ECoG activity in the high gamma (γhigh, 70-150 Hz) frequency range within the superior temporal gyrus (STG) and observed two types of cortical sites of interest in both hemispheres: one type showed no significant difference in γhigh activity between consonant and dissonant chords, and another type showed increased γhigh responses to dissonant chords between 75 and 200 ms post-stimulus onset. Furthermore, a subset of these sites exhibited additional sensitivity towards different types of dissonant chords. We also observed a distinct spatial organization of cortical sites in the right STG, with dissonant-sensitive sites located anterior to non-sensitive sites. In sum, these findings demonstrate differential processing of consonance and dissonance in bilateral STG, with the right hemisphere exhibiting robust and spatially organized sensitivity towards dissonance.
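    The high-gamma measure referred to above is commonly obtained by band-pass filtering and taking the analytic-signal amplitude. The sketch below is a generic illustration assuming SciPy is available; the filter order and band edges are illustrative and not the study's actual preprocessing.

```python
# Extract the high-gamma (70-150 Hz) amplitude envelope from one ECoG channel.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(signal, fs, lo=70.0, hi=150.0, order=4):
    """Return the instantaneous high-gamma amplitude envelope of `signal`."""
    b, a = butter(order, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="bandpass")
    filtered = filtfilt(b, a, signal)            # zero-phase band-pass filtering
    return np.abs(hilbert(filtered))             # analytic-signal amplitude
```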

  3. High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat.

    Directory of Open Access Journals (Sweden)

    Blake E Butler

    Full Text Available The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential "pitch centres" in cortex, the existence of truly pitch-responsive neurons requires single-neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI) of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN) stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo). Pitch-related activity was not observed in either primary auditory cortex (A1) or the anterior auditory field (AAF), which together comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF). This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception.
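    Iterated rippled noise is typically generated by a delay-and-add procedure, with the perceived pitch near the reciprocal of the delay. The sketch below is a generic illustration of that procedure; the delay, iteration count and gain are illustrative values, not the stimulus parameters used in the study.

```python
# Generate iterated rippled noise (IRN) by repeatedly delaying the signal and
# adding it back to itself; a 4-ms delay gives a pitch near 250 Hz (1 / 0.004 s).
import numpy as np

def iterated_rippled_noise(duration_s, fs, delay_ms=4.0, n_iter=16, gain=1.0,
                           rng=None):
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(int(duration_s * fs))
    d = int(round(delay_ms * 1e-3 * fs))         # delay in samples
    out = noise.copy()
    for _ in range(n_iter):                      # each pass: delay and add back
        delayed = np.concatenate([np.zeros(d), out[:-d]])
        out = out + gain * delayed
    return out / np.max(np.abs(out))             # normalise to +/- 1

irn = iterated_rippled_noise(1.0, fs=44100)
```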

  4. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an autosomal dominant epileptic syndrome characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were afterwards screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in successive generations, which is especially important for genetic counselling.

  5. Temporal Expectation and Information Processing: A Model-Based Analysis

    Science.gov (United States)

    Jepma, Marieke; Wagenmakers, Eric-Jan; Nieuwenhuis, Sander

    2012-01-01

    People are able to use temporal cues to anticipate the timing of an event, enabling them to process that event more efficiently. We conducted two experiments, using the fixed-foreperiod paradigm (Experiment 1) and the temporal-cueing paradigm (Experiment 2), to assess which components of information processing are speeded when subjects use such…

  6. White matter microstructure is associated with auditory and tactile processing in children with and without sensory processing disorder

    Directory of Open Access Journals (Sweden)

    Yi Shin Chang

    2016-01-01

    Full Text Available Sensory processing disorders (SPD) affect up to 16% of school-aged children, and contribute to cognitive and behavioral deficits impacting affected individuals and their families. While sensory processing differences are now widely recognized in children with autism, children with sensory-based dysfunction who do not meet autism criteria based on social communication deficits remain virtually unstudied. In a previous pilot diffusion tensor imaging (DTI) study, we demonstrated that boys with SPD have altered white matter microstructure primarily affecting the posterior cerebral tracts, which subserve sensory processing and integration. This disrupted microstructural integrity, measured as reduced white matter fractional anisotropy (FA), correlated with parent-report measures of atypical sensory behavior. In this present study, we investigate white matter microstructure as it relates to tactile and auditory function in depth with a larger, mixed-gender cohort of children 8 to 12 years of age. We continue to find robust alterations of posterior white matter microstructure in children with SPD relative to typically developing children, along with more spatially distributed alterations. We find strong correlations of FA with both parent-report and direct measures of tactile and auditory processing across children, with the direct assessment measures of tactile and auditory processing showing a stronger and more continuous mapping to the underlying white matter integrity than the corresponding parent-report measures. Based on these findings of microstructure as a neural correlate of sensory processing ability, diffusion MRI merits further investigation as a tool to find biomarkers for diagnosis, prognosis and treatment response in children with SPD. To our knowledge, this work is the first to demonstrate associations of directly measured tactile and non-linguistic auditory function with white matter microstructural integrity -- not just in children with
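    For reference, the fractional anisotropy referred to above is conventionally computed from the three eigenvalues of the diffusion tensor (a standard definition, not a formula given in the paper):

\[
\mathrm{FA} \;=\; \sqrt{\tfrac{1}{2}}\;
\frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}
     {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}
\]

    FA ranges from 0 (isotropic diffusion) to 1 (diffusion restricted to a single axis), so lower FA values are taken to indicate reduced white matter microstructural integrity.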

  7. Avaliação do processamento auditivo em crianças com dificuldades de aprendizagem Auditory processing evaluation in children with learning difficulties

    Directory of Open Access Journals (Sweden)

    Lucilene Engelmann

    2009-01-01

    Full Text Available PURPOSE: To clarify the relationship between learning difficulties and auditory processing disorder in second-grade students. METHODS: Based on the application of reading tests, the students of a second-grade class of an elementary school were classified into two groups according to their reading fluency: a group with better fluency (group A) and another with less fluency (group B). A between-group analysis of the auditory processing tests was carried out. RESULTS: All participants presented learning difficulties and auditory processing disorder in almost all primary subprofiles. The verbal sequential memory abilities of the less fluent group (group B) were significantly better (p=0.030). CONCLUSION: The diagnosis of primary auditory processing disorder is questioned, and the importance of stimulating verbal sequential memory for learning to read and write is emphasized. In view of these observations, further research should be conducted to study this variable and its relationship with temporal auditory processing.

  8. Simple ears-flexible behavior: Information processing in the moth auditory pathway

    Institute of Scientific and Technical Information of China (English)

    Gerit PFUHL; Blanka KALINOVA; Irena VALTEROVA; Bente G.BERG

    2015-01-01

    Lepidoptera evolved tympanic ears in response to echolocating bats. Comparative studies have shown that moth ears evolved many times independently from chordotonal organs. With only 1 to 4 receptor cells, they are one of the simplest hearing organs. The small number of receptors does not imply simplicity, neither in behavior nor in the neural circuit. Behaviorally, the response to ultrasound is far from being a simple reflex. Moths' escape behavior is modulated by a variety of cues, especially pheromones, which can alter the auditory response. Neurally, the receptor cell(s) diverges onto many interneurons, enabling parallel processing and feature extraction. Ascending interneurons and sound-sensitive brain neurons innervate a neuropil in the ventrolateral protocerebrum. Further, recent electrophysiological data provide the first glimpses into how the acoustic response is modulated as well as how ultrasound influences the other senses. So far, the auditory pathway has been studied in noctuids. The findings agree well with common computational principles found in other insects. However, moth ears also show unique mechanical and neural adaptations. Here, we first describe the variety of moths' auditory behavior, especially the co-option of ultrasonic signals for intraspecific communication. Second, we describe the current knowledge of the neural pathway gained from noctuid moths. Finally, we argue that Galleriinae, which show negative and positive phonotaxis, are an interesting model species for future electrophysiological studies of the auditory pathway and multimodal sensory integration, and so are ideally suited for the study of the evolution of behavioral mechanisms given a few receptors [Current Zoology 61 (2): 292-302, 2015].

  9. Sensorimotor nucleus NIf is necessary for auditory processing but not vocal motor output in the avian song system.

    Science.gov (United States)

    Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F

    2005-04-01

    Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.

  10. Electrophysiological assessment of auditory processing disorder in children with non-syndromic cleft lip and/or palate

    Science.gov (United States)

    McPherson, Bradley; Ma, Lian

    2016-01-01

    Objectives Cleft lip and/or palate is a common congenital craniofacial malformation found worldwide. A frequently associated disorder is conductive hearing loss, and this disorder has been thoroughly investigated in children with non-syndromic cleft lip and/or palate (NSCL/P). However, analysis of auditory processing function is rarely reported for this population, although this issue should not be ignored since abnormal auditory cortical structures have been found in populations with cleft disorders. The present study utilized electrophysiological tests to assess the auditory status of a large group of children with NSCL/P, and investigated whether this group had less robust central auditory processing abilities compared to craniofacially normal children. Methods 146 children with NSCL/P who had normal peripheral hearing thresholds, and 60 craniofacially normal children aged from 6 to 15 years, were recruited. Electrophysiological tests, including auditory brainstem response (ABR), P1-N1-P2 complex, and P300 component recording, were conducted. Results ABR and N1 wave latencies were significantly prolonged in children with NSCL/P. An atypical developmental trend was found for long latency potentials in children with cleft compared to control group children. Children with unilateral cleft lip and palate showed a greater level of abnormal results compared with other cleft subgroups, whereas the cleft lip subgroup had the most robust responses for all tests. Conclusion Children with NSCL/P may have slower than normal neural transmission times between the peripheral auditory nerve and brainstem. Possible delayed development of myelination and synaptogenesis may also influence auditory processing function in this population. Present research outcomes were consistent with previous, smaller sample size, electrophysiological studies on infants and children with cleft lip/palate disorders. In view of these findings, and reports of educational disadvantage associated

  11. Temporal processes involved in simultaneous reflection masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg

    2006-01-01

    Reflection masking refers to the specific masking condition where a test reflection is masked by the direct sound. Employing reflection masking techniques, Buchholz [J. Acoust. Soc. Am. 117, 2484 (2005)] provided evidence that the binaural system suppresses the test reflection for very short reflection delays and enhances the test reflection for large delays. Employing a 200-ms-long broadband noise burst as input signal, the critical delay separating these two binaural phenomena was found to be 7–10 ms. It was suggested that the critical delay refers to a temporal window that is employed...

  12. The quest for universals in temporal processing in music.

    Science.gov (United States)

    Drake, C; Bertrand, D

    2001-06-01

    Music perception and performance rely heavily on temporal processing: for instance, each event must be situated in time in relation to surrounding events, and events must be grouped together in order to overcome memory constraints. The temporal structure of music varies considerably from one culture to another, and so it has often been supposed that the specific implementation of perceptual and cognitive temporal processes will differ as a function of an individual's cultural exposure and experience. In this paper we examine the alternative position that some temporal processes may be universal, in the sense that they function in a similar manner irrespective of an individual's cultural exposure and experience. We first review rhythm perception and production studies carried out with adult musicians, adult nonmusicians, children, and infants in order to identify temporal processes that appear to function in a similar fashion irrespective of age, acculturation, and musical training. This review leads to the identification of five temporal processes that we submit as candidates for the status of "temporal universals." For each process, we select the simplest and most representative experimental paradigm that has been used to date. This leads to a research proposal for future intercultural studies that could test the universal nature of these processes. PMID:11458828

  14. Basic auditory processing is related to familial risk, not to reading fluency: an ERP study.

    Science.gov (United States)

    Hakvoort, Britt; van der Leij, Aryan; Maurits, Natasha; Maassen, Ben; van Zuijen, Titia L

    2015-02-01

    Less proficient basic auditory processing has been previously connected to dyslexia. However, it is unclear whether a low proficiency level is a correlate of having a familial risk for reading problems, or whether it causes dyslexia. In this study, children's processing of amplitude rise time (ART), intensity and frequency differences was measured with event-related potentials (ERPs). ERP components of interest are components reflective of auditory change detection; the mismatch negativity (MMN) and late discriminative negativity (LDN). All groups had an MMN to changes in ART and frequency, but not to intensity. Our results indicate that fluent readers at risk for dyslexia, poor readers at risk for dyslexia and fluent reading controls have an LDN to changes in ART and frequency, though the scalp activation of frequency processing was different for familial risk children. On intensity, only controls showed an LDN. Contrary to previous findings, our results suggest that neither ART nor frequency processing is related to reading fluency. Furthermore, our results imply that diminished sensitivity to changes in intensity and differential lateralization of frequency processing should be regarded as correlates of being at familial risk for dyslexia, that do not directly relate to reading fluency. PMID:25243992

  15. 听觉皮层信号处理%Information processing in auditory cortex

    Institute of Scientific and Technical Information of China (English)

    王晓勤

    2009-01-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex include two types of important transformations: the non-isomorphic transformation of acoustic features and the transformation from acoustical to perceptual dimensions. Neural representations in auditory cortex are also modulated by auditory feedback and vocal control signals during speaking or vocalization. The challenges facing auditory neuroscientists and biomedical engineers are to understand neural coding mechanisms in the brain underlying such transformations. I will use recent findings from my laboratory to illustrate how acoustic information is processed in the primate auditory cortex and discuss its implications for neural processing of speech and music in the brain as well as for the design of neural prosthetic devices such as cochlear implants. We have used a combination of neurophysiological techniques and quantitative engineering tools to investigate these problems.%听觉系统和视觉系统的不同之处在于:听觉系统在外周感受器和听皮层间具有更长的皮层下通路和更多的突触联系.该特殊结构反应了听觉系统从复杂听觉环境中提取与行为相关信号的机制与其他感觉系统不同.听皮层神经信号处理包括两种重要的转换机制,声音信号的非同构转换以及从声音感受到知觉层面的转换.听觉皮层神经编码机制同时也受到听觉反馈和语言或发声过程中发声信号的调控.听觉神经科学家和生物医学工程师所面临的挑战便是如何去理解大脑中这些转换的编码机制.我将会用我实验

  16. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten;

    2007-01-01

    The present study suggests a neural circuit for the transformation from the temporal to the rate-based code. Due to the neural connectivity of the circuit, bandpass-shaped rate modulation transfer functions are obtained that correspond to recorded functions of inferior colliculus (IC) neurons. In contrast to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies (BMFs) of the IC bandpass units; instead, different BMFs are obtained by varying the number of input units projecting onto different bandpass units. In order to investigate the compatibility of the neural circuit with a linear modulation filterbank analysis as proposed in psychophysical studies, complex stimuli such as tones modulated by the sum of two sinusoids, narrowband noise, and iterated rippled noise were processed by the model.

  17. Evaluation of temporal bone pneumatization on high resolution CT (HRCT) measurements of the temporal bone in normal and otitis media group and their correlation to measurements of internal auditory meatus, vestibular or cochlear aqueduct

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Miyako

    1988-07-01

    High resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These cases consisted of 109 sides of normal pneumatization (NR group) and 73 of poor pneumatization resulting from chronic otitis (OM group). The NR group included sensorineural hearing loss cases and/or sudden deafness on the scanned side. Three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter as well as the length of the internal acoustic meatus, the vestibular aqueduct or the cochlear aqueduct were measured. Measurements of the temporal bone showed a statistically significant difference between the NR and OM groups. Correlations of both the diameter and the length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither of the measurements on the vestibular or the cochlear aqueduct showed any significant correlation with those of the temporal bone.

  18. Neurite-specific Ca2+ dynamics underlying sound processing in an auditory interneurone.

    Science.gov (United States)

    Baden, T; Hedwig, B

    2007-01-01

    Concepts on neuronal signal processing and integration at a cellular and subcellular level are driven by recording techniques and model systems available. The cricket CNS with the omega-1-neurone (ON1) provides a model system for auditory pattern recognition and directional processing. Exploiting ON1's planar structure we simultaneously imaged free intracellular Ca(2+) at both input and output neurites and recorded the membrane potential in vivo during acoustic stimulation. In response to a single sound pulse the rate of Ca(2+) rise followed the onset spike rate of ON1, while the final Ca(2+) level depended on the mean spike rate. Ca(2+) rapidly increased in both dendritic and axonal arborizations and only gradually in the axon and the cell body. Ca(2+) levels were particularly high at the spike-generating zone. Through the activation of a Ca(2+)-sensitive K(+) current this may exhibit a specific control over the cell's electrical response properties. In all cellular compartments presentation of species-specific calling song caused distinct oscillations of the Ca(2+) level in the chirp rhythm, but not the faster syllable rhythm. The Ca(2+)-mediated hyperpolarization of ON1 suppressed background spike activity between chirps, acting as a noise filter. During directional auditory processing, the functional interaction of Ca(2+)-mediated inhibition and contralateral synaptic inhibition was demonstrated. Upon stimulation with different sound frequencies, the dendrites, but not the axonal arborizations, demonstrated a tonotopic response profile. This mirrored the dominance of the species-specific carrier frequency and resulted in spatial filtering of high frequency auditory inputs.

  19. Changes in Electroencephalogram Approximate Entropy Reflect Auditory Processing and Functional Complexity in Frogs

    Institute of Scientific and Technical Information of China (English)

    Yansu LIU; Yanzhu FAN; Fei XUE; Xizi YUE; Steven E BRAUTH; Yezhong TANG; Guangzhan FANG

    2016-01-01

    Brain systems engage in what are generally considered to be among the most complex forms of information processing. In the present study, we investigated the functional complexity of anuran auditory processing using the approximate entropy (ApEn) protocol for electroencephalogram (EEG) recordings from the forebrain and midbrain while male and female music frogs (Babina daunchina) listened to acoustic stimuli whose biological significance varied. The stimuli used were synthesized white noise (reflecting a novel signal), conspecific male advertisement calls with either high or low sexual attractiveness (reflecting sexual selection) and silence (reflecting a baseline). The results showed that 1) ApEn evoked by conspecific calls exceeded ApEn evoked by synthesized white noise in the left mesencephalon, indicating this structure plays a critical role in processing acoustic signals with biological significance; 2) ApEn in the mesencephalon was significantly higher than for the telencephalon, consistent with the fact that the anuran midbrain contains a large well-organized auditory nucleus (torus semicircularis) while the forebrain does not; 3) for females ApEn in the mesencephalon was significantly different than that of males, suggesting that males and females process biological stimuli related to mate choice differently.
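    As an illustration of the approximate entropy (ApEn) measure used in this record, a minimal Python sketch is given below. It is not the authors' code; the embedding dimension m = 2 and the tolerance of 0.2 times the signal's standard deviation are common defaults assumed here for illustration.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (ApEn) of a 1-D signal x (e.g., one EEG channel).

    m        : embedding dimension (length of compared patterns)
    r_factor : tolerance as a fraction of the signal's standard deviation
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # all overlapping patterns of length m
        patterns = np.array([x[i:i + m] for i in range(n - m + 1)])
        # for each pattern, count patterns within tolerance r (Chebyshev distance)
        counts = [np.sum(np.max(np.abs(patterns - p), axis=1) <= r) for p in patterns]
        return np.mean(np.log(np.array(counts) / (n - m + 1)))

    return phi(m) - phi(m + 1)

# A regular signal yields low ApEn, an irregular one high ApEn:
rng = np.random.default_rng(0)
t = np.arange(1000) / 250.0                                # 4 s at 250 Hz
print(approximate_entropy(np.sin(2 * np.pi * 10 * t)))     # low (regular)
print(approximate_entropy(rng.standard_normal(1000)))      # high (irregular)
```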

  20. Mutation of Dcdc2 in mice leads to impairments in auditory processing and memory ability.

    Science.gov (United States)

    Truong, D T; Che, A; Rendall, A R; Szalkowski, C E; LoTurco, J J; Galaburda, A M; Holly Fitch, R

    2014-11-01

    Dyslexia is a complex neurodevelopmental disorder characterized by impaired reading ability despite normal intellect, and is associated with specific difficulties in phonological and rapid auditory processing (RAP), visual attention and working memory. Genetic variants in Doublecortin domain-containing protein 2 (DCDC2) have been associated with dyslexia, impairments in phonological processing and in short-term/working memory. The purpose of this study was to determine whether sensory and behavioral impairments can result directly from mutation of the Dcdc2 gene in mice. Several behavioral tasks, including a modified pre-pulse inhibition paradigm (to examine auditory processing), a 4/8 radial arm maze (to assess/dissociate working vs. reference memory) and rotarod (to examine sensorimotor ability and motor learning), were used to assess the effects of Dcdc2 mutation. Behavioral results revealed deficits in RAP, working memory and reference memory in Dcdc2(del2/del2) mice when compared with matched wild types. Current findings parallel clinical research linking genetic variants of DCDC2 with specific impairments of phonological processing and memory ability.

  1. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    Full Text Available In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  2. Auditory Brain Stem Processing in Reptiles and Amphibians: Roles of Coupled Ears

    DEFF Research Database (Denmark)

    Willis, Katie L.; Christensen-Dalsgaard, Jakob; Carr, Catherine

    2014-01-01

    Comparative approaches to the auditory system have yielded great insight into the evolution of sound localization circuits, particularly within the nonmammalian tetrapods. The fossil record demonstrates multiple appearances of tympanic hearing, and examination of the auditory brain stem of various ...

  3. The effectiveness of imagery and sentence strategy instructions as a function of visual and auditory processing in young school-age children.

    Science.gov (United States)

    Weed, K; Ryan, E B

    1985-12-01

    The relationship between auditory and visual processing modality and strategy instructions was examined in first- and second-grade children. A Pictograph Sentence Memory Test was used to determine dominant processing modality as well as to assess instructional effects. The pictograph task was given first followed by auditory or visual interference. Children who were disrupted more by visual interference were classed as visual processors and those more disrupted by auditory interference were classed as auditory processors. Auditory and visual processors were then assigned to one of three conditions: interactive imagery strategy, sentence strategy, or a control group. Children in the imagery and sentence strategy groups were briefly taught to integrate the pictographs in order to remember them better. The sentence strategy was found to be effective for both auditory and visual processors, whereas the interactive imagery strategy was effective only for auditory processors.

  4. Auditory target processing in methadone substituted opiate addicts: The effect of nicotine in controls

    Directory of Open Access Journals (Sweden)

    Zerbin Dieter

    2007-11-01

    Full Text Available Abstract Background The P300 component of the auditory evoked potential is an indicator of attention dependent target processing. Only a few studies have assessed cognitive function in substituted opiate addicts by means of evoked potential recordings. In addition, P300 data suggest that chronic nicotine use reduces P300 amplitudes. While nicotine and opiate effects combine in addicted subjects, here we investigated the P300 component of the auditory event related potential in methadone substituted opiate addicts with and without concomitant non-opioid drug use in comparison to a group of control subjects with and without nicotine consumption. Methods We assessed 47 opiate addicted out-patients under current methadone substitution and 65 control subjects matched for age and gender in a 2-stimulus auditory oddball paradigm. Patients were grouped into those with and without additional non-opioid drug use, and controls were grouped by current nicotine use. P300 amplitude and latency data were analyzed at electrodes Fz, Cz and Pz. Results Patients and controls did not differ with regard to P300 amplitudes and latencies when whole groups were compared. Subgroup analyses revealed significantly reduced P300 amplitudes in controls with nicotine use when compared to those without. P300 amplitudes of methadone substituted opiate addicts were in between the two control groups and did not differ with regard to additional non-opioid use. Controls with nicotine had lower P300 amplitudes when compared to patients with concomitant non-opioid drugs. No P300 latency effects were found. Conclusion Attention dependent target processing as indexed by the P300 component amplitudes and latencies is not reduced in methadone substituted opiate addicts when compared to controls. The effect of nicotine on P300 amplitudes in healthy subjects exceeds the effects of long term opioid addiction under methadone substitution.
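    To illustrate how P300 amplitude and latency are typically extracted from an oddball recording, a minimal Python sketch follows. It is illustrative only: the epoch layout, baseline window and search window are assumptions, not values reported in the study.

```python
import numpy as np

def p300_peak(epochs, fs=500.0, baseline_s=0.2, window=(0.25, 0.5)):
    """Estimate P300 amplitude and latency from target-locked EEG epochs.

    epochs     : array of shape (n_trials, n_samples) for one electrode (e.g., Pz),
                 each epoch starting baseline_s seconds before stimulus onset
    fs         : sampling rate in Hz
    window     : latency window (seconds after onset) searched for the P300 peak
    Returns (peak_amplitude, peak_latency_in_seconds).
    """
    erp = np.asarray(epochs).mean(axis=0)            # average over trials
    t = np.arange(erp.size) / fs - baseline_s        # time axis, 0 = stimulus onset
    erp = erp - erp[t < 0].mean()                    # baseline correction
    mask = (t >= window[0]) & (t <= window[1])
    peak = np.argmax(erp[mask])                      # P300 is a positive deflection
    return erp[mask][peak], t[mask][peak]
```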

  5. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

    Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they provide a listener with enough information to identify and differentiate sources ...
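    The running time-averaged statistics referred to here can be illustrated with a rough sketch: filter the signal into a few subbands, extract each band's envelope, and track running envelope means and variances. The band edges, filter order and window length below are illustrative choices, not those of the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def texture_statistics(x, fs, band_edges=(100, 400, 1600, 6400), win_s=1.0):
    """Running time-averaged statistics of subband envelopes of a sound texture.

    x          : mono signal
    fs         : sampling rate in Hz
    band_edges : filterbank edges in Hz (illustrative; must lie below fs/2)
    win_s      : length of the running averaging window in seconds
    Returns {(lo, hi): (running_mean, running_variance)} per band.
    """
    win = int(win_s * fs)
    kernel = np.ones(win) / win
    stats = {}
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))          # subband envelope
        mean = np.convolve(env, kernel, mode="same")        # running mean
        var = np.maximum(np.convolve(env ** 2, kernel, mode="same") - mean ** 2, 0.0)
        stats[(lo, hi)] = (mean, var)
    return stats
```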

  6. Results from a National Central Auditory Processing Disorder Service: A Real-World Assessment of Diagnostic Practices and Remediation for Central Auditory Processing Disorder.

    Science.gov (United States)

    Cameron, Sharon; Glyde, Helen; Dillon, Harvey; King, Alison; Gillies, Karin

    2015-11-01

    This article describes the development and evaluation of a national service to diagnose and remediate central auditory processing disorder (CAPD). Data were gathered from 38 participating Australian Hearing centers over an 18-month period from 666 individuals age 6, 0 (years, months) to 24, 8 (median 9, 0). A total of 408 clients were diagnosed with either a spatial processing disorder (n = 130), a verbal memory deficit (n = 174), or a binaural integration deficit (n = 104). A hierarchical test protocol was used so not all children were assessed on all tests in the battery. One hundred fifty clients decided to proceed with deficit-specific training (LiSN & Learn or Memory Booster) and/or be fitted with a frequency modulation system. Families were provided with communication strategies targeted to a child's specific listening difficulties and goals. Outcomes were measured using repeat assessment of the relevant diagnostic test, as well as the Client Oriented Scale of Improvement measure and Listening Inventories for Education teacher questionnaire. Group analyses revealed significant improvements postremediation for all training/management options. Individual posttraining performance and results of outcome measures also are discussed. PMID:27587910

  7. Oxytocin receptor gene associated with the efficiency of social auditory processing

    Directory of Open Access Journals (Sweden)

    Mattie eTops

    2011-11-01

    Full Text Available Oxytocin has been shown to facilitate social aspects of sensory processing, thereby enhancing social communicative behaviors and empathy. Here we report that compared to the AA/AG genotypes, the presumably more efficient GG genotype of an oxytocin receptor gene polymorphism (OXTR rs53576) that has previously been associated with increased sensitivity of social processing is related to less self-reported difficulty in hearing and understanding people when there is background noise. The present result extends associations between oxytocin and social processing to the auditory and vocal domain. We discuss the relevance of our findings for autistic spectrum disorders (ASD), as ASD seems related to specific impairments in the orienting to, and selection of speech sounds from background noise, and some social processing impairments in patients with ASD have been found responsive to oxytocin treatment.

  8. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
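    The repeating frozen noise stimuli described above are straightforward to construct: a single noise segment whose duration equals the reciprocal of the repetition rate is looped for the whole stimulus. A minimal sketch, with an arbitrary sampling rate and duration, might look like this:

```python
import numpy as np

def repeating_frozen_noise(rate_hz, duration_s, fs=44100, rng=None):
    """Periodic ('repeating frozen') noise: one noise segment of length 1/rate_hz
    is looped for the whole stimulus duration.

    rate_hz    : repetition rate in Hz (e.g., 5, 10, 50, 200 or 500)
    duration_s : total stimulus duration in seconds
    fs         : sampling rate in Hz
    """
    rng = rng or np.random.default_rng()
    segment = rng.standard_normal(int(round(fs / rate_hz)))   # the frozen segment
    n_total = int(round(duration_s * fs))
    reps = int(np.ceil(n_total / segment.size))
    x = np.tile(segment, reps)[:n_total]
    return x / np.max(np.abs(x))                              # normalize to +/- 1

# e.g. a 2-s stimulus repeating at 10 Hz, plus an aperiodic white-noise control
periodic = repeating_frozen_noise(10, 2.0)
control = np.random.default_rng(1).standard_normal(periodic.size)
```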

  9. Right cerebral hemisphere and central auditory processing in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Paulina C. Murphy-Ruiz

    2013-11-01

    Full Text Available Objective We hypothesized that if the right hemisphere auditory processing abilities can be altered in children with developmental dyslexia (DD), we can detect dysfunction using specific tests. Method We performed an analytical comparative cross-sectional study. We studied 20 right-handed children with DD and 20 healthy right-handed control subjects (CS). Children in both groups were age, gender, and school-grade matched. Focusing on the right hemisphere’s contribution, we utilized tests to measure alterations in central auditory processing (CAP), such as determination of frequency patterns; sound duration; music pitch recognition; and identification of environmental sounds. We compared results between the two groups. Results Children with DD showed lower performance than CS in all CAP subtests, including those that preferentially engaged the cerebral right hemisphere. Conclusion Our data suggest a significant contribution of the right hemisphere in alterations of CAP in children with DD. Thus, right hemisphere CAP must be considered for examination and rehabilitation of children with DD.

  10. Processamento auditivo em idosos: implicações e soluções Auditory processing in elderly: implications and solutions

    Directory of Open Access Journals (Sweden)

    Leonardo Henrique Buss

    2010-02-01

    Full Text Available TEMA: processamento auditivo em idosos. OBJETIVO: estudar, através de uma revisão teórica, o processamento auditivo em idosos, as desordens que o envelhecimento auditivo causam, bem como os recursos para reduzir as defasagens nas habilidades auditivas envolvidas no processamento auditivo. CONCLUSÃO: vários são os desajustes ocasionados pela desordem do processamento auditivo em idosos. É necessária a continuidade de estudos científicos nessa área para aplicar adequadas medidas intervencionistas, a fim de garantir a reabilitação do indivíduo a tempo de minimizar os efeitos da desordem auditiva sobre o mesmo.BACKGROUND: auditory processing in elderly. PURPOSE: to promote a theoretical approach on auditory processing in elderly people, the disorders caused by hearing aging, as well as the resources to minimize the auditory aging impairment of the hearing abilities involved in the auditory processing. CONCLUSION: the alterations caused by auditory processing disorder in elderly people are many. It is necessary to continue researching in this field in order to apply adequate interventionist measures, in order to assure the rehabilitation of the individual in time to minimize the effects of the hearing disorder.

  11. Chinese-English bilinguals processing temporal-spatial metaphor.

    Science.gov (United States)

    Xue, Jin; Yang, Jie; Zhao, Qian

    2014-08-01

    The conceptual projection of time onto the domain of space constitutes one of the most challenging issues in the cognitive embodied theories. In Chinese, spatial order (e.g., /da shu qian/, in front of a tree) shares the same terms with temporal sequence (/san yue qian/, before March). In comparison, English natives use different sets of prepositions to describe spatial and temporal relationship, i.e., "before" to express temporal sequencing and "in front of" to express spatial order. The linguistic variations regarding the specific lexical encodings indicate that some flexibility might be available in how space-time parallelisms are formulated across different languages. In the present study, ERP (Event-related potentials) data were collected when Chinese-English bilinguals processed temporal ordering and spatial sequencing in both their first language (L1) Chinese (Experiment 1) and the second language (L2) English (Experiment 2). It was found that, despite the different lexical encodings, early sensorimotor simulation plays a role in temporal sequencing processing in both L1 Chinese and L2 English. The findings well support the embodied theory that conceptual knowledge is grounded in sensory-motor systems (Gallese and Lakoff, Cogn Neuropsychol 22:455-479, 2005). Additionally, in both languages, neural representations during comprehending temporal sequencing and spatial ordering are different. The time-spatial relationship is asymmetric, in that space schema could be imported into temporal sequence processing but not vice versa. These findings support the weak view of the Metaphoric Mapping Theory. PMID:24889328

  13. Neurobehavioral mechanisms of temporal processing deficits in Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Deborah L Harrington

    Full Text Available BACKGROUND: Parkinson's disease (PD) disrupts temporal processing, but the neuronal sources of deficits and their response to dopamine (DA) therapy are not understood. Though the striatum and DA transmission are thought to be essential for timekeeping, potential working memory (WM) and executive problems could also disrupt timing. METHODOLOGY/FINDINGS: The present study addressed these issues by testing controls and PD volunteers 'on' and 'off' DA therapy as they underwent fMRI while performing a time-perception task. To distinguish systems associated with abnormalities in temporal and non-temporal processes, we separated brain activity during encoding and decision-making phases of a trial. Whereas both phases involved timekeeping, the encoding and decision phases emphasized WM and executive processes, respectively. The methods enabled exploration of both the amplitude and temporal dynamics of neural activity. First, we found that time-perception deficits were associated with striatal, cortical, and cerebellar dysfunction. Unlike studies of timed movement, our results could not be attributed to traditional roles of the striatum and cerebellum in movement. Second, for the first time we identified temporal and non-temporal sources of impaired time perception. Striatal dysfunction was found during both phases, consistent with its role in timekeeping. Activation was also abnormal in a WM network (middle-frontal and parietal cortex, lateral cerebellum) during encoding and a network that modulates executive and memory functions (parahippocampus, posterior cingulate) during decision making. Third, hypoactivation typified neuronal dysfunction in PD, but was sometimes characterized by abnormal temporal dynamics (e.g., lagged, prolonged) that were not due to longer response times. Finally, DA therapy did not alleviate timing deficits. CONCLUSIONS/SIGNIFICANCE: Our findings indicate that impaired timing in PD arises from nigrostriatal and mesocortical dysfunction.

  14. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  15. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    It was shown that an accurate simulation of cochlear input-output functions, in addition to the audiogram, played a major role in accounting both for sensitivity and supra-threshold processing. Finally, the model was used as a front-end in a framework developed to predict consonant discrimination. It was shown that most observations in the measured consonant discrimination error patterns were predicted by the model, although error rates were systematically underestimated by the model for a few particular acoustic-phonetic features. These results reflect a relation between basic auditory processing deficits and reduced speech perception performance in the listeners with cochlear hearing loss. Overall, this work suggests a possible explanation of the variability in consequences of cochlear hearing loss. The proposed model might be an interesting tool for, e.g., evaluation of hearing-aid signal processing.

  16. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  17. Behavioral Signs of (Central) Auditory Processing Disorder in Children With Nonsyndromic Cleft Lip and/or Palate: A Parental Questionnaire Approach.

    Science.gov (United States)

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2016-03-01

    Objective Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate. Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an

  18. Multimodal imaging of temporal processing in typical and atypical language development.

    Science.gov (United States)

    Kovelman, Ioulia; Wagley, Neelima; Hay, Jessica S F; Ugolini, Margaret; Bowyer, Susan M; Lajiness-O'Neill, Renee; Brennan, Jonathan

    2015-03-01

    New approaches to understanding language and reading acquisition propose that the human brain's ability to synchronize its neural firing rate to syllable-length linguistic units may be important to children's ability to acquire human language. Yet, little evidence from brain imaging studies has been available to support this proposal. Here, we summarize three recent brain imaging (functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG)) studies from our laboratories with young English-speaking children (aged 6-12 years). In the first study (fNIRS), we used an auditory beat perception task to show that, in children, the left superior temporal gyrus (STG) responds preferentially to rhythmic beats at 1.5 Hz. In the second study (fMRI), we found correlations between children's amplitude rise-time sensitivity, phonological awareness, and brain activation in the left STG. In the third study (MEG), typically developing children outperformed children with autism spectrum disorder in extracting words from rhythmically rich foreign speech and displayed different brain activation during the learning phase. The overall findings suggest that the efficiency with which left temporal regions process slow temporal (rhythmic) information may be important for gains in language and reading proficiency. These findings carry implications for better understanding of the brain's mechanisms that support language and reading acquisition during both typical and atypical development.

  19. Processing of harmonics in the lateral belt of macaque auditory cortex.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer; Rauschecker, Josef P

    2014-01-01

    Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations ("coos"). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB. PMID:25100935

  20. IMPAIRED PROCESSING IN THE PRIMARY AUDITORY CORTEX OF AN ANIMAL MODEL OF AUTISM

    Directory of Open Access Journals (Sweden)

    Renata eAnomal

    2015-11-01

    Full Text Available Autism is a neurodevelopmental disorder clinically characterized by deficits in communication, lack of social interaction, and repetitive behaviors with restricted interests. A number of studies have reported that sensory perception abnormalities are common in autistic individuals and might contribute to the complex behavioral symptoms of the disorder. In this context, hearing incongruence is particularly prevalent. Considering that some of this abnormal processing might stem from the imbalance of inhibitory and excitatory drives in brain circuitries, we used an animal model of autism induced by valproic acid (VPA) during pregnancy in order to investigate the tonotopic organization of the primary auditory cortex (AI) and its local inhibitory circuitry. Our results show that VPA rats have distorted primary auditory maps with over-representation of high frequencies, broadly tuned receptive fields and higher sound intensity thresholds as compared to controls. However, we did not detect differences in the number of parvalbumin-positive interneurons in AI of VPA and control rats. Altogether, our findings show that neurophysiological impairments of hearing perception in this autism model occur independently of alterations in the number of parvalbumin-expressing interneurons. These data support the notion that fine circuit alterations, rather than gross cellular modification, could lead to neurophysiological changes in the autistic brain.

  1. Developmental Dyslexia: Exploring How Much Phonological and Visual Attention Span Disorders Are Linked to Simultaneous Auditory Processing Deficits

    Science.gov (United States)

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were…

  2. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
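    The delay-and-coincide mechanism described above can be caricatured in a few lines of code. The sketch below is not the authors' model; the delay, tolerance and pulse periods are arbitrary illustrative values. A pulse train whose inter-pulse interval matches the internal delay drives the detector most strongly.

```python
import numpy as np

def coincidence_response(pulse_times, delay, tolerance=0.002):
    """Toy delay-and-coincide detector for a train of sound pulses.

    Counts how often the direct response to a pulse coincides (within
    `tolerance` seconds) with an internally delayed response to an earlier
    pulse, so regular trains whose inter-pulse interval matches `delay`
    drive it strongly.  (A pure coincidence rule like this would also accept
    intervals that fit an integer number of times into the delay.)
    """
    pulse_times = np.asarray(pulse_times, dtype=float)
    delayed = pulse_times + delay
    return int(sum(np.any(np.abs(pulse_times - d) <= tolerance) for d in delayed))

# A detector "tuned" to a 34-ms pulse period (all numbers are illustrative)
detector_delay = 0.034
for period in (0.023, 0.034, 0.068):          # pulse rates of roughly 43, 29 and 15 Hz
    pulses = np.arange(10) * period           # ten regularly spaced pulses
    print(period, coincidence_response(pulses, detector_delay))
# -> only the 34-ms train produces many coincidences
```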

  3. Differential Processing of Consonance and Dissonance within the Human Superior Temporal Gyrus.

    Science.gov (United States)

    Foo, Francine; King-Stephens, David; Weber, Peter; Laxer, Kenneth; Parvizi, Josef; Knight, Robert T

    2016-01-01

    The auditory cortex is well-known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals directly from the lateral surface of either the left or right temporal lobe of eight patients undergoing neurosurgical treatment as they passively listened to highly consonant and highly dissonant musical chords. We assessed ECoG activity in the high gamma (γhigh, 70-150 Hz) frequency range within the superior temporal gyrus (STG) and observed two types of cortical sites of interest in both hemispheres: one type showed no significant difference in γhigh activity between consonant and dissonant chords, and another type showed increased γhigh responses to dissonant chords between 75 and 200 ms post-stimulus onset. Furthermore, a subset of these sites exhibited additional sensitivity towards different types of dissonant chords, and a positive correlation between changes in γhigh power and the degree of stimulus roughness was observed in both hemispheres. We also observed a distinct spatial organization of cortical sites in the right STG, with dissonant-sensitive sites located anterior to non-sensitive sites. In sum, these findings demonstrate differential processing of consonance and dissonance in bilateral STG with the right hemisphere exhibiting robust and spatially organized sensitivity toward dissonance. PMID:27148011

  4. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning.

    Science.gov (United States)

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc

    2013-06-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans.

  5. Auditory Display

    DEFF Research Database (Denmark)

    The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.

  6. On spatio-temporal Lévy based Cox processes

    DEFF Research Database (Denmark)

    Prokesova, Michaela; Hellmund, Gunnar; Jensen, Eva Bjørn Vedel

    2006-01-01

    The paper discusses a new class of models for spatio-temporal Cox point processes. In these models, the driving field is defined by means of an integral of a weight function with respect to a Lévy basis. The relations to other Cox process models studied previously are discussed and formulas for t...

  7. MODIS multi-temporal data retrieval and processing toolbox

    NARCIS (Netherlands)

    Mattiuzzi, M.; Verbesselt, J.; Klisch, A.

    2012-01-01

    The package's functionality is focused on the download and processing of multi-temporal datasets from MODIS sensors. All standard MODIS grid data can be accessed and processed by the package routines. The package is still in alpha development, and not all functionalities are available yet.

  8. The effect of spectrally and temporally altered auditory feedback on speech intonation by hard of hearing listeners

    Science.gov (United States)

    Barac-Cikoja, Dragana; Tamaki, Chizuko; Thomas, Lannie

    2003-04-01

    Eight listeners with severe to profound hearing loss read a six-sentence passage under spectrally altered and/or delayed auditory feedback. Spectral manipulation was implemented by filtering the speech signal into either one or four frequency bands, extracting respective amplitude envelope(s), and amplitude-modulating the corresponding noise band(s). Thus, the resulting auditory feedback did not preserve intonation information, although the four-band noise signal remained intelligible. The two noise conditions and the unaltered speech were each tested under the simultaneous and three delayed (50 ms, 100 ms, 200 ms) feedback conditions. Auditory feedback was presented via insert earphones at the listener's most comfortable level. Recorded speech was analyzed for the form and domain of the fundamental frequency (f0) declination, the magnitude of the sentence initial f0 peak (P1), and the fall-rise pattern of f0 at the phrasal boundaries. A significant interaction between the two feedback manipulations was found. Intonation characteristics were affected by speech delay only under the spectrally unaltered feedback: The magnitude of P1 and the slope of the f0 topline both increased with the delay. The spectral smearing diminished the fall-rise pattern within a sentence. Individual differences in the magnitude of these effects were significant.
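    The spectral manipulation described here is, in essence, noise vocoding. A minimal Python sketch is shown below; the band edges, filter order and level normalization are illustrative assumptions rather than the parameters used in the study, and the feedback delay is implemented as a simple sample shift.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoded_feedback(speech, fs, band_edges=(100, 500, 1500, 3500, 7000), delay_ms=0):
    """Noise-vocoded (spectrally 'smeared') feedback signal, optionally delayed.

    speech     : mono speech signal
    fs         : sampling rate in Hz
    band_edges : filterbank edges in Hz; five edges give the four-band condition,
                 a two-element tuple gives the single-band condition
                 (values are illustrative and must lie below fs/2)
    delay_ms   : feedback delay, e.g. 0, 50, 100 or 200
    """
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, speech)))              # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech))) # band-limited noise
        out += env * carrier                                         # envelope-modulated noise band
    out *= np.sqrt(np.mean(np.square(speech)) / np.mean(np.square(out)))  # match RMS level
    n_delay = int(round(delay_ms / 1000 * fs))
    return np.concatenate([np.zeros(n_delay), out])[:len(speech)]
```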

  9. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    OpenAIRE

    Harper, NS; Willmore, BDB; Schnupp, JWH; King, AJ; Schoppe, O

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet, current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here, we present a model of neural responses in the ferret auditory cortex (the I...
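
    The abstract is truncated before the model details, so the following is only a generic, hedged illustration of the idea named in the title: adding an exponentially weighted estimate of recent mean sound level to a linear-nonlinear (LN) response model. The parameter names (tau_frames, gain), the subtractive form of the adaptation, and the softplus output stage are invented for illustration and are not the published model.

    import numpy as np

    def ln_with_mean_level_adaptation(spectrogram, strf, tau_frames=50, gain=1.0):
        """Toy LN model whose drive is reduced by the recent mean sound level.

        spectrogram: array (n_freq, n_time) of log sound level
        strf: linear filter (n_freq, n_lags), most recent lag in the last column
        """
        n_lags = strf.shape[1]
        alpha = 1.0 / tau_frames                      # update rate of the level estimate
        mean_level = spectrogram[:, 0].mean()
        rates = np.zeros(spectrogram.shape[1])
        for t in range(spectrogram.shape[1]):
            mean_level = (1 - alpha) * mean_level + alpha * spectrogram[:, t].mean()
            window = spectrogram[:, max(0, t - n_lags + 1):t + 1]
            drive = np.sum(strf[:, -window.shape[1]:] * window)   # linear stage
            drive -= gain * mean_level                # subtractive mean-level adaptation
            rates[t] = np.log1p(np.exp(drive))        # softplus output nonlinearity
        return rates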

  10. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit.

  11. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  12. Auditory processing in high-functioning adolescents with Autism Spectrum Disorder.

    Directory of Open Access Journals (Sweden)

    Anne-Marie R DePape

    Full Text Available Autism Spectrum Disorder (ASD) is a pervasive developmental disorder including abnormalities in perceptual processing. We measure perception in a battery of tests across speech (filtering, phoneme categorization, multisensory integration) and music (pitch memory, meter categorization, harmonic priming). We found that compared to controls, the ASD group showed poorer filtering, less audio-visual integration, less specialization for native phonemic and metrical categories, and a higher instance of absolute pitch. No group differences were found in harmonic priming. Our results are discussed in a developmental framework where culture-specific knowledge acquired early compared to late in development is most impaired, perhaps because of early-accelerated brain growth in ASD. These results suggest that early auditory remediation is needed for good communication and social functioning.

  13. Assessment of auditory sensory processing in a neurodevelopmental animal model of schizophrenia-Gating of auditory-evoked potentials and prepulse inhibition

    DEFF Research Database (Denmark)

    Broberg, Brian Villumsen; Oranje, Bob; Yding, Birte;

    2010-01-01

    The use of translational approaches to validate animal models is needed for the development of treatments that can effectively alleviate the cognitive impairments associated with schizophrenia, which are unsuccessfully treated by the currently available therapies. Deficits in pre-attentive stages of sensory information processing seen in schizophrenia patients can be assessed by highly homologous methods in both humans and rodents, as evident in the prepulse inhibition (PPI) of the auditory startle response and the P50 (termed P1 here) suppression paradigms. Treatment with the NMDA receptor antagonist PCP on postnatal days 7, 9, and 11 reliably induces cognitive impairments resembling those presented by schizophrenia patients. The findings confirm that measures of early information processing show a high resemblance between rodents and humans, and indicate that early postnatal PCP-treated rats show deficits in pre-attentional processing, which are distinct from those observed in schizophrenia patients.

  14. Predictive Power of Attention and Reading Readiness Variables on Auditory Reasoning and Processing Skills of Six-Year-Old Children

    Science.gov (United States)

    Erbay, Filiz

    2013-01-01

    The aim of present research was to describe the relation of six-year-old children's attention and reading readiness skills (general knowledge, word comprehension, sentences, and matching) with their auditory reasoning and processing skills. This was a quantitative study based on scanning model. Research sampling consisted of 204 kindergarten…

  15. Basic Auditory Processing Deficits in Dyslexia: Systematic Review of the Behavioral and Event-Related Potential/Field Evidence

    Science.gov (United States)

    Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.

    2013-01-01

    A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…

  16. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    Science.gov (United States)

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  17. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system cons

  18. Temporal processing deficits in letter-by-letter reading.

    Science.gov (United States)

    Ingles, Janet L; Eskes, Gail A

    2007-01-01

    Theories of the cognitive impairment underlying letter-by-letter reading vary widely, including prelexical and lexical level deficits. One prominent prelexical account proposes that the disorder results from difficulty in processing multiple letters simultaneously. We investigated whether this deficit extends to letters presented in rapid temporal succession. A letter-by-letter reader, G.M., was administered a rapid serial visual presentation task that has been used widely to study the temporal processing characteristics of the normal visual system. Comparisons were made to a control group of 6 brain-damaged individuals without reading deficits. Two target letters were embedded at varying temporal positions in a stream of rapidly presented single digits. After each stream, the identities of the two letters were reported. G.M. required an extended period of time after he had processed one letter before he was able to reliably identify a second letter, relative to the controls. In addition, G.M.'s report of the second letter was most impaired when it immediately followed the first letter, a pattern not seen in the controls, indicating that G.M. had difficulty processing the two items together. These data suggest that a letter-by-letter reading strategy may be adopted to help compensate for a deficit in the temporal processing of letters.

  19. Temporal event-structure coding in developmental dyslexia: Evidence from explicit and implicit temporal processes

    Directory of Open Access Journals (Sweden)

    Elliott Mark A.

    2010-01-01

    Full Text Available As an alternative to theories positing visual or phonological deficits, it has been suggested that the aetiology of dyslexia takes the form of a temporal processing deficit that may refer to impairment in the functional connectivity of the processes involved in reading. Here we investigated this idea in an experimental task designed to measure simultaneity thresholds. Fifteen children diagnosed with developmental dyslexia, alongside a matched sample of 13 normal readers, undertook a series of threshold determination procedures designed to locate visual simultaneity thresholds and to assess the influence of subthreshold synchrony or asynchrony upon these thresholds. While there were no significant differences in simultaneity thresholds between dyslexic and normal readers, indicating no evidence of altered perception or temporal quantization of events, the dyslexic readers reported simultaneity significantly less frequently than normal readers, with the reduction largely attributable to the presentation of a subthreshold asynchrony. The results are discussed in terms of a whole-systems approach to maintaining information processing integrity.

  20. Frequency processing at consecutive levels in the auditory system of bush crickets (tettigoniidae).

    Science.gov (United States)

    Ostrowski, Tim Daniel; Stumpner, Andreas

    2010-08-01

    We asked how processing of male signals in the auditory pathway of the bush cricket Ancistrura nigrovittata (Phaneropterinae, Tettigoniidae) changes from the ear to the brain. From 37 sensory neurons in the crista acustica single elements (cells 8 or 9) have frequency tuning corresponding closely to the behavioral tuning of the females. Nevertheless, one-quarter of sensory neurons (approximately cells 9 to 18) excite the ascending neuron 1 (AN1), which is best tuned to the male's song carrier frequency. AN1 receives frequency-dependent inhibition, reducing sensitivity especially in the ultrasound. When recorded in the brain, AN1 shows slightly lower overall activity than when recorded in the prothoracic ganglion close to the spike-generating zone. This difference is significant in the ultrasonic range. The first identified local brain neuron in a bush cricket (LBN1) is described. Its dendrites overlap with some of AN1-terminations in the brain. Its frequency tuning and intensity dependence strongly suggest a direct postsynaptic connection to AN1. Spiking in LBN1 is only elicited after summation of excitatory postsynaptic potentials evoked by individual AN1-action potentials. This serves a filtering mechanism that reduces the sensitivity of LBN1 and also its responsiveness to ultrasound as compared to AN1. Consequently, spike latencies of LBN1 are long (>30 ms) despite its being a second-order interneuron. Additionally, LBN1 receives frequency-specific inhibition, most likely further reducing its responses to ultrasound. This demonstrates that frequency-specific inhibition is redundant in two directly connected interneurons on subsequent levels in the auditory system. PMID:20533362

  1. Frequency processing at consecutive levels in the auditory system of bush crickets (tettigoniidae).

    Science.gov (United States)

    Ostrowski, Tim Daniel; Stumpner, Andreas

    2010-08-01

    We asked how processing of male signals in the auditory pathway of the bush cricket Ancistrura nigrovittata (Phaneropterinae, Tettigoniidae) changes from the ear to the brain. From 37 sensory neurons in the crista acustica single elements (cells 8 or 9) have frequency tuning corresponding closely to the behavioral tuning of the females. Nevertheless, one-quarter of sensory neurons (approximately cells 9 to 18) excite the ascending neuron 1 (AN1), which is best tuned to the male's song carrier frequency. AN1 receives frequency-dependent inhibition, reducing sensitivity especially in the ultrasound. When recorded in the brain, AN1 shows slightly lower overall activity than when recorded in the prothoracic ganglion close to the spike-generating zone. This difference is significant in the ultrasonic range. The first identified local brain neuron in a bush cricket (LBN1) is described. Its dendrites overlap with some of AN1-terminations in the brain. Its frequency tuning and intensity dependence strongly suggest a direct postsynaptic connection to AN1. Spiking in LBN1 is only elicited after summation of excitatory postsynaptic potentials evoked by individual AN1-action potentials. This serves a filtering mechanism that reduces the sensitivity of LBN1 and also its responsiveness to ultrasound as compared to AN1. Consequently, spike latencies of LBN1 are long (>30 ms) despite its being a second-order interneuron. Additionally, LBN1 receives frequency-specific inhibition, most likely further reducing its responses to ultrasound. This demonstrates that frequency-specific inhibition is redundant in two directly connected interneurons on subsequent levels in the auditory system.

  2. Spatial and Temporal Features of Superordinate Semantic Processing Studied with fMRI and EEG.

    Directory of Open Access Journals (Sweden)

    Michelle E Costanzo

    2013-07-01

    Full Text Available The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization: the extraction of general features shared by broad classes of exemplars (e.g. living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales, during superordinate categorization, specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event-related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. nonliving) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality-independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items.

  3. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    Science.gov (United States)

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to the avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, i.e. from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of

  4. Lateralization of auditory-cortex functions.

    Science.gov (United States)

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

    In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. Main emphasis is given to hemodynamic and electromagnetic data of healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (called planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to informational sound content but to rapid temporal information more common in speech than in music sounds. Finally, we obtain evidence for the vulnerability of the functional specialization of sound processing. These altered forms of lateralization may be caused by top-down and bottom-up effects inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding. PMID:14629926

  5. Effects of parietal TMS on visual and auditory processing at the primary cortical level -- a concurrent TMS-fMRI study.

    Science.gov (United States)

    Leitão, Joana; Thielscher, Axel; Werner, Sebastian; Pohmann, Rolf; Noppeney, Uta

    2013-04-01

    Accumulating evidence suggests that multisensory interactions emerge already at the primary cortical level. Specifically, auditory inputs were shown to suppress activations in visual cortices when presented alone but amplify the blood oxygen level-dependent (BOLD) responses to concurrent visual inputs (and vice versa). This concurrent transcranial magnetic stimulation-functional magnetic resonance imaging (TMS-fMRI) study applied repetitive TMS trains at no, low, and high intensity over right intraparietal sulcus (IPS) and vertex to investigate top-down influences on visual and auditory cortices under 3 sensory contexts: visual, auditory, and no stimulation. IPS-TMS increased activations in auditory cortices irrespective of sensory context as a result of direct and nonspecific auditory TMS side effects. In contrast, IPS-TMS modulated activations in the visual cortex in a state-dependent fashion: it deactivated the visual cortex under no and auditory stimulation but amplified the BOLD response to visual stimulation. However, only the response amplification to visual stimulation was selective for IPS-TMS, while the deactivations observed for IPS- and Vertex-TMS resulted from crossmodal deactivations induced by auditory activity to TMS sounds. TMS to IPS may increase the responses in visual (or auditory) cortices to visual (or auditory) stimulation via a gain control mechanism or crossmodal interactions. Collectively, our results demonstrate that understanding TMS effects on (uni)sensory processing requires a multisensory perspective.

  6. Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.

    Science.gov (United States)

    Wang, Avery Li-Chun

    This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p (n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p (n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramer -Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic -Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters
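
    The average winding rate mentioned in this record can be illustrated with a short numerical sketch: the instantaneous frequency of a complex phasor is estimated from the phase advance between consecutive samples. This is a generic textbook formulation given for illustration, not the thesis's exact estimator or its Frequency-Locked Loop.

    import numpy as np

    def winding_rate_frequency(z, fs):
        """Estimate instantaneous frequency (Hz) of a complex phasor z[n]
        from its sample-to-sample phase advance (the winding rate)."""
        dphi = np.angle(z[1:] * np.conj(z[:-1]))   # phase increment per sample in (-pi, pi]
        return dphi * fs / (2.0 * np.pi)

    # Usage: a 440 Hz phasor sampled at 16 kHz is recovered as ~440 Hz.
    fs = 16000.0
    n = np.arange(1024)
    z = np.exp(2j * np.pi * 440.0 * n / fs)
    print(winding_rate_frequency(z, fs).mean())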

  7. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single click and click pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first and second order interneurons. The response to the second click was expressed relative to the single click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominent a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by a click a few milliseconds before. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.

  8. Systematic Review of the Effectiveness of Frequency Modulation Devices in Improving Academic Outcomes in Children With Auditory Processing Difficulties.

    Science.gov (United States)

    Reynolds, Stacey; Miller Kuhaneck, Heather; Pfeiffer, Beth

    2016-01-01

    This systematic review describes the published evidence related to the effectiveness of frequency modulation (FM) devices in improving academic outcomes in children with auditory processing difficulties. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses standards were used to identify articles published between January 2003 and March 2014. The Cochrane Population, Intervention, Control, Outcome, Study Design approach and the American Occupational Therapy Association process forms were used to guide the article selection and evaluation process. Of the 83 articles screened, 7 matched the systematic review inclusion criteria. Findings were consistently positive, although limitations were identified. Results of this review indicate moderate support for the use of FM devices to improve children's ability to listen and attend in the classroom and mixed evidence to improve specific academic performance areas. FM technology should be considered for school-age children with auditory processing impairments who are receiving occupational therapy services to improve functioning in the school setting. PMID:26709423

  9. Differential bilateral involvement of the parietal gyrus during predicative metaphor processing: an auditory fMRI study.

    Science.gov (United States)

    Obert, Alexandre; Gierski, Fabien; Calmus, Arnaud; Portefaix, Christophe; Declercq, Christelle; Pierot, Laurent; Caillies, Stéphanie

    2014-10-01

    Despite the growing literature on figurative language processing, there is still debate as to which cognitive processes and neural bases are involved. Furthermore, most studies have focused on nominal metaphor processing without any context, and very few have used auditory presentation. We therefore investigated the neural bases of the comprehension of predicative metaphors presented in a brief context, in an auditory, ecological way. The comprehension of their literal counterparts served as a control condition. We also investigated the link between working memory and verbal skills and regional activation. Comparisons of metaphorical and literal conditions revealed bilateral activation of parietal areas including the left angular (lAG) and right inferior parietal gyri (rIPG) and right precuneus. Only verbal skills were associated with lAG (but not rIPG) activation. These results indicated that predicative metaphor comprehension shares common activations with other metaphors. Furthermore, individual verbal skills could have an impact on figurative language processing.

  10. Assessment of auditory sensory processing in a neurodevelopmental animal model of schizophrenia-Gating of auditory-evoked potentials and prepulse inhibition

    DEFF Research Database (Denmark)

    Broberg, Brian Villumsen; Oranje, Bob; Yding, Birte;

    2010-01-01

    The use of translational approaches to validate animal models is needed for the development of treatments that can effectively alleviate the cognitive impairments associated with schizophrenia, which are unsuccessfully treated by the currently available therapies. Deficits in pre-attentive stages of sensory information processing seen in schizophrenia patients can be assessed by highly homologous methods in both humans and rodents, as evident in the prepulse inhibition (PPI) of the auditory startle response and the P50 (termed P1 here) suppression paradigms. Treatment with the NMDA receptor antagonist PCP on postnatal days 7, 9, and 11 reliably induces cognitive impairments resembling those presented by schizophrenia patients. Here we evaluate the potential of early postnatal PCP (20 mg/kg) treatment in Lister Hooded rats to induce post-pubertal deficits in PPI and changes, such as reduced gating...

  11. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory.

    Science.gov (United States)

    Bigand, Emmanuel; Delbé, Charles; Poulin-Charronnat, Bénédicte; Leman, Marc; Tillmann, Barbara

    2014-01-01

    During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulation also raises methodological and theoretical challenges to study musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.
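
    One way to make the auditory-memory account above concrete is a leaky-integrator sketch: each context chord adds a sensory vector to a decaying memory trace, and the priming of a target chord is its similarity to that trace. The pitch-class (chroma) representation, the decay constant, and the cosine similarity used here are simplifying assumptions standing in for the more detailed sensory model used in the published simulations.

    import numpy as np

    def chord_vector(pitch_classes, n=12):
        """Crude sensory stand-in: a binary pitch-class (chroma) vector."""
        v = np.zeros(n)
        v[list(pitch_classes)] = 1.0
        return v

    def priming_score(context_chords, target_chord, decay=0.5):
        """Accumulate context chords in a leaky memory trace, then score the
        target chord by its cosine similarity to the trace."""
        trace = np.zeros(12)
        for chord in context_chords:
            trace = decay * trace + chord_vector(chord)   # leaky integration
        target = chord_vector(target_chord)
        return float(trace @ target / (np.linalg.norm(trace) * np.linalg.norm(target)))

    # A C major context primes a related G major target more than an unrelated F sharp major target.
    context = [(0, 4, 7), (5, 9, 0), (7, 11, 2), (0, 4, 7)]   # C, F, G, C triads
    print(priming_score(context, (7, 11, 2)))    # related target, higher score
    print(priming_score(context, (6, 10, 1)))    # unrelated target, lower score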

  12. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory.

    Science.gov (United States)

    Bigand, Emmanuel; Delbé, Charles; Poulin-Charronnat, Bénédicte; Leman, Marc; Tillmann, Barbara

    2014-01-01

    During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulation also raises methodological and theoretical challenges to study musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds. PMID:24936174

  13. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory

    Directory of Open Access Journals (Sweden)

    Emmanuel Bigand

    2014-06-01

    Full Text Available During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulation also raises methodological and theoretical challenges to study musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.

  14. Temporal and Location Based RFID Event Data Management and Processing

    Science.gov (United States)

    Wang, Fusheng; Liu, Peiya

    Advance of sensor and RFID technology provides significant new power for humans to sense, understand and manage the world. RFID provides fast data collection with precise identification of objects with unique IDs without line of sight, thus it can be used for identifying, locating, tracking and monitoring physical objects. Despite these benefits, RFID poses many challenges for data processing and management. RFID data are temporal and history oriented, multi-dimensional, and carry implicit semantics. Moreover, RFID applications are heterogeneous. RFID data management or data warehouse systems need to support generic and expressive data modeling for tracking and monitoring physical objects, and provide automated data interpretation and processing. We develop a powerful temporal and location oriented data model for modeling and querying RFID data, and a declarative event and rule based framework for automated complex RFID event processing. The approach is general and can be easily adapted for different RFID-enabled applications, thus significantly reduces the cost of RFID data integration.
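
    As an illustration of the kind of temporal, rule-based processing described in this record, the sketch below keeps a per-object location history from raw reads and fires a simple complex-event rule when an object dwells at one location longer than a threshold. The record fields (RfidRead), the DwellTracker class, and the dwell rule are invented for illustration; they are not the paper's schema or rule language.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import Dict, Optional

    @dataclass
    class RfidRead:
        tag_id: str          # unique object ID
        location: str        # reader/antenna location
        timestamp: datetime

    @dataclass
    class DwellTracker:
        """Derives 'object stayed at location >= threshold' events from raw reads."""
        threshold: timedelta
        entered_at: Dict[str, datetime] = field(default_factory=dict)
        last_location: Dict[str, str] = field(default_factory=dict)

        def process(self, read: RfidRead) -> Optional[str]:
            if self.last_location.get(read.tag_id) != read.location:
                # location changed: restart the dwell interval for this object
                self.last_location[read.tag_id] = read.location
                self.entered_at[read.tag_id] = read.timestamp
                return None
            if read.timestamp - self.entered_at[read.tag_id] >= self.threshold:
                return f"{read.tag_id} dwelled at {read.location} for >= {self.threshold}"
            return None

    # Usage: a tag read twice at the same dock ten minutes apart triggers the rule.
    tracker = DwellTracker(threshold=timedelta(minutes=5))
    t0 = datetime(2012, 1, 1, 9, 0)
    print(tracker.process(RfidRead("tag-42", "dock-A", t0)))                          # None
    print(tracker.process(RfidRead("tag-42", "dock-A", t0 + timedelta(minutes=10))))  # fires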

  15. Research on Process-oriented Spatio-temporal Data Model

    Directory of Open Access Journals (Sweden)

    XUE Cunjin

    2016-02-01

    Full Text Available Based on an analysis of the present status and existing problems of the spatio-temporal data models developed over the last 20 years, this paper proposes a process-oriented spatio-temporal data model (POSTDM) aimed at representing, organizing, and storing continuous, gradually changing geographical entities. Dynamic geographical entities are graded and abstracted, according to their intrinsic characteristics, into a series of process objects: process objects, process stage objects, process sequence objects, and process state objects. The logical relationships among process entities are further studied, and the UML model structure and storage are also designed. In addition, through the mechanisms of continuity and gradual change implicitly recorded by process objects, and the procedural interfaces offered by the customized ObjectStorageTable, the POSTDM can carry out process representation, storage, and dynamic analysis of continuous, gradually changing geographic entities. Taking the process organization and storage of marine data as an example, a prototype system (consisting of an object-relational database and a functional analysis platform) is developed for validating and evaluating the model's practicability.
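
    A minimal sketch of how the four object levels named above might be nested, with process objects composed of stage objects, stages composed of sequence objects, and sequences composed of time-stamped state objects. This particular nesting and the field names are illustrative assumptions, not the published UML design.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class ProcessStateObject:
        """Snapshot of a geographic entity at one time step (geometry kept abstract)."""
        timestamp: datetime
        geometry_wkt: str            # e.g. a polygon footprint in WKT

    @dataclass
    class ProcessSequenceObject:
        """A run of states with a consistent trend (e.g. expanding, shrinking)."""
        trend: str
        states: List[ProcessStateObject] = field(default_factory=list)

    @dataclass
    class ProcessStageObject:
        """A stage of the process, such as origination, development, or dissipation."""
        name: str
        sequences: List[ProcessSequenceObject] = field(default_factory=list)

    @dataclass
    class ProcessObject:
        """Top-level object for one continuous, gradually changing geographic entity."""
        entity_id: str
        stages: List[ProcessStageObject] = field(default_factory=list)

        def lifespan(self):
            """Time span covered by all recorded states, if any."""
            times = [s.timestamp for st in self.stages
                     for seq in st.sequences for s in seq.states]
            return (min(times), max(times)) if times else None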

  16. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues caused by changes in head orientation for online motor control. How to design informative acoustic feedback needs to be studied carefully in order to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  17. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    Science.gov (United States)

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

    The present study determines the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire piece of music interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (when only music segments, rather than continuous music, were heard). If the stimuli were only noise, the lateralization was about 65%. But if the music was mixed with noise, the right-hemispheric lateralization increased. In particular, if the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all of the subjects showed right-hemispheric lateralization: this is attributed to the subjects' effort to hear the music in the presence of noise. However, too much noise reduced the subjects' discerning effort. PMID:25538583
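
    The stimulus feature mentioned above, the difference between the mean and the minimum of the hemodynamic response, is straightforward to compute; a short sketch, assuming the oxygenated-hemoglobin response has already been epoched into one array per stimulus.

    import numpy as np

    def mean_minus_min(hbo_epoch):
        """Feature for a stimulus-evoked hemodynamic response: mean minus minimum."""
        hbo_epoch = np.asarray(hbo_epoch, dtype=float)
        return hbo_epoch.mean() - hbo_epoch.min()

    # Usage on a toy epoch (arbitrary units)
    print(mean_minus_min([0.0, -0.2, 0.1, 0.5, 0.8, 0.6, 0.2]))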

  18. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder.

    Science.gov (United States)

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-01

    Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological, and social functions, has gradually been increasing. However, very few studies conducted to date have addressed issues related to the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs), with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD. PMID:26812042

  19. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder

    Science.gov (United States)

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-01

    Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological, and social functions, has gradually been increasing. However, very few studies conducted to date have addressed issues related to the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs), with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD. PMID:26812042

  20. To modulate and be modulated: estrogenic influences on auditory processing of communication signals within a socio-neuro-endocrine framework.

    Science.gov (United States)

    Yoder, Kathleen M; Vicario, David S

    2012-02-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1) Local estradiol action within an auditory area is necessary for socially relevant sounds to induce normal physiological responses in the brains of both sexes; 2) These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3) Estradiol action within the auditory forebrain enables behavioral discrimination among socially relevant sounds in males; and 4) Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281

  1. Auditory cortical processing: Binaural interaction in healthy and ROBO1-deficient subjects

    OpenAIRE

    Lamminmäki, Satu

    2012-01-01

    Two functioning ears provide clear advantages over monaural listening. During natural binaural listening, robust brain-level interaction occurs between the slightly different inputs from the left and the right ear. Binaural interaction requires convergence of inputs from the two ears somewhere in the auditory system, and it therefore relies on midline crossing of auditory pathways, a fundamental property of the mammalian central nervous system. Binaural interaction plays a significant ro...

  2. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.

  3. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. PMID:25291405

  4. Interhemispheric Connectivity Influences the Degree of Modulation of TMS-Induced Effects during Auditory Processing.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2011-01-01

    Repetitive transcranial magnetic stimulation (rTMS) has been shown to interfere with many components of language processing, including semantic, syntactic, and phonologic. However, not much is known about its effects on nonlinguistic auditory processing, especially its action on Heschl's gyrus (HG). We aimed to investigate the behavioral and neural basis of rTMS during a melody processing task, while targeting the left HG, the right HG, and the Vertex as a control site. Response times (RT) were normalized relative to the baseline-rTMS (Vertex) and expressed as percentage change from baseline (%RT change). We also looked at sex differences in rTMS-induced response as well as in functional connectivity during melody processing using rTMS and functional magnetic resonance imaging (fMRI). fMRI results showed an increase in the right HG compared with the left HG during the melody task, as well as sex differences in functional connectivity indicating a greater interhemispheric connectivity between left and right HG in females compared with males. TMS results showed that 10 Hz-rTMS targeting the right HG induced differential effects according to sex, with a facilitation of performance in females and an impairment of performance in males. We also found a differential correlation between the %RT change after 10 Hz-rTMS targeting the right HG and the interhemispheric functional connectivity between right and left HG, indicating that an increase in interhemispheric functional connectivity was associated with a facilitation of performance. This is the first study to report a differential rTMS-induced interference with melody processing depending on sex. In addition, we showed a relationship between the interference induced by rTMS on behavioral performance and the neural activity in the network connecting left and right HG, suggesting that the interhemispheric functional connectivity could determine the degree of modulation of behavioral performance.

  5. Interhemispheric connectivity influences the degree of modulation of TMS-induced effects during auditory processing

    Directory of Open Access Journals (Sweden)

    Jamila Andoh

    2011-07-01

    Full Text Available Repetitive TMS (rTMS) has been shown to interfere with many components of language processing, including semantic, syntactic and phonologic. However, not much is known about its effects on primary auditory processing, especially its action on Heschl’s gyrus (HG). We aimed to investigate the behavioural and neural basis of rTMS during a melody processing task, while targeting the left HG, the right HG and the Vertex as a control site. Response Times (RT) were normalized relative to the baseline-rTMS (Vertex) and expressed as percentage change from baseline (%RT change). We also looked at sex differences in rTMS-induced response as well as in functional connectivity during melody processing using rTMS and functional Magnetic Resonance Imaging (fMRI). Functional MRI results showed an increase in the right HG compared with the left HG during the melody task, as well as sex differences in functional connectivity indicating a greater interhemispheric connectivity between left and right HG in females compared with males. TMS results showed that 10Hz-rTMS targeting the right HG induced differential effects according to sex, with a facilitation of performance in females and an impairment of performance in males. We also found a differential correlation between the %RT change after 10Hz-rTMS targeting the right HG and the interhemispheric functional connectivity between right and left HG, indicating that an increase in interhemispheric functional connectivity was associated with a facilitation of performance. This is the first study to report a differential rTMS-induced interference with melody processing depending on sex. In addition, we showed a relationship between the interference induced by rTMS on behavioral performance and the neural activity in the network connecting left and right HG, suggesting that the interhemispheric functional connectivity could determine the degree of modulation of behavioral performance.

  6. A Model of Auditory-Cognitive Processing and Relevance to Clinical Applicability.

    Science.gov (United States)

    Edwards, Brent

    2016-01-01

    Hearing loss and cognitive function interact in both a bottom-up and top-down relationship. Listening effort is tied to these interactions, and models have been developed to explain their relationship. The Ease of Language Understanding model in particular has gained considerable attention in its explanation of the effect of signal distortion on speech understanding. Signal distortion can also affect auditory scene analysis ability, however, resulting in a distorted auditory scene that can affect cognitive function, listening effort, and the allocation of cognitive resources. These effects are explained through an addition to the Ease of Language Understanding model. This model can be generalized to apply to all sounds, not only speech, representing the increased effort required for auditory environmental awareness and other nonspeech auditory tasks. While the authors have measures of speech understanding and cognitive load to quantify these interactions, they are lacking measures of the effect of hearing aid technology on auditory scene analysis ability and how effort and attention varies with the quality of an auditory scene. Additionally, the clinical relevance of hearing aid technology on cognitive function and the application of cognitive measures in hearing aid fittings will be limited until effectiveness is demonstrated in real-world situations. PMID:27355775

  7. Dynamic temporal signal processing in the inferior colliculus of echolocating bats

    Science.gov (United States)

    Jen, Philip H.-S.; Wu, Chung Hsin; Wang, Xin

    2012-01-01

    In nature, communication sounds among animal species including humans are typical complex sounds that occur in sequence and vary with time in several parameters including amplitude, frequency, duration as well as separation, and order of individual sounds. Among these multiple parameters, sound duration is a simple but important one that contributes to the distinct spectral and temporal attributes of individual biological sounds. Likewise, the separation of individual sounds is an important temporal attribute that determines an animal's ability in distinguishing individual sounds. Whereas duration selectivity of auditory neurons underlies an animal's ability in recognition of sound duration, the recovery cycle of auditory neurons determines a neuron's ability in responding to closely spaced sound pulses and therefore, it underlies the animal's ability in analyzing the order of individual sounds. Since the multiple parameters of naturally occurring communication sounds vary with time, the analysis of a specific sound parameter by an animal would be inevitably affected by other co-varying sound parameters. This is particularly obvious in insectivorous bats, which rely on analysis of returning echoes for prey capture when they systematically vary the multiple pulse parameters throughout a target approach sequence. In this review article, we present our studies of dynamic variation of duration selectivity and recovery cycle of neurons in the central nucleus of the inferior colliculus of the frequency-modulated bats to highlight the dynamic temporal signal processing of central auditory neurons. These studies use single pulses and three biologically relevant pulse-echo (P-E) pairs with varied duration, gap, and amplitude difference similar to that occurring during search, approach, and terminal phases of hunting by bats. These studies show that most collicular neurons respond maximally to a best tuned sound duration (BD). The sound duration to which these neurons are

  8. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  9. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  10. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included measuring electrophysiological and behavioral auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests and higher hearing aid benefit in noisy situations (p-value < 0.05). No changes were noted for the control group (p-value > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds and greater benefits in reverberant and noisy environments.

  11. A corollary discharge mechanism modulates central auditory processing in singing crickets.

    Science.gov (United States)

    Poulet, J F A; Hedwig, B

    2003-03-01

    Crickets communicate using loud (100 dB SPL) sound signals that could adversely affect their own auditory system. To examine how they cope with this self-generated acoustic stimulation, intracellular recordings were made from auditory afferent neurons and an identified auditory interneuron-the Omega 1 neuron (ON1)-during pharmacologically elicited singing (stridulation). During sonorous stridulation, the auditory afferents and ON1 responded with bursts of spikes to the crickets' own song. When the crickets were stridulating silently, after one wing had been removed, only a few spikes were recorded in the afferents and ON1. Primary afferent depolarizations (PADs) occurred in the terminals of the auditory afferents, and inhibitory postsynaptic potentials (IPSPs) were apparent in ON1. The PADs and IPSPs were composed of many summed, small-amplitude potentials that occurred at a rate of about 230 Hz. The PADs and the IPSPs started during the closing wing movement and peaked in amplitude during the subsequent opening wing movement. As a consequence, during silent stridulation, ON1's response to acoustic stimuli was maximally inhibited during wing opening. Inhibition coincides with the time when ON1 would otherwise be most strongly excited by self-generated sounds in a sonorously stridulating cricket. The PADs and the IPSPs persisted in fictively stridulating crickets whose ventral nerve cord had been isolated from muscles and sense organs. This strongly suggests that the inhibition of the auditory pathway is the result of a corollary discharge from the stridulation motor network. The central inhibition was mimicked by hyperpolarizing current injection into ON1 while it was responding to a 100 dB SPL sound pulse. This suppressed its spiking response to the acoustic stimulus and maintained its response to subsequent, quieter stimuli. The corollary discharge therefore prevents auditory desensitization in stridulating crickets and allows the animals to respond to external

  12. Avaliação do processamento auditivo na Neurofibromatose tipo 1 Auditory processing evaluation in Neurofibromatosis type 1

    Directory of Open Access Journals (Sweden)

    Pollyanna Barros Batista

    2010-12-01

    Full Text Available The aim of this study was to present the results obtained in the auditory processing evaluation of a patient with neurofibromatosis type 1. Although the patient presented normal peripheral hearing, auditory processing deficits were identified in several abilities. This finding, described for the first time in neurofibromatosis, might help to explain the cognitive and learning disabilities broadly described for this common genetic disorder.

  13. Participação do cerebelo no processamento auditivo Participation of the cerebellum in auditory processing

    Directory of Open Access Journals (Sweden)

    Patrícia Maria Sens

    2007-04-01

    Full Text Available The cerebellum, traditionally conceived as a controlling organ of motricity, is today considered an important integration center for sensory input and for the coordination of various phases of the cognitive process. AIM: To gather and organize the literature on the cerebellum's role in auditory perception. METHODS: We selected animal studies of both the physiology and the anatomy of the cerebellar auditory pathways, as well as papers on humans discussing several functions of the cerebellum in auditory perception. The literature reviewed provides evidence that the cerebellum participates in many cognitive functions related to hearing: speech generation, auditory processing, auditory attention, auditory memory, abstract reasoning, timing, problem solving, sensory discrimination, sensory information, language processing, and linguistic operations. CONCLUSION: The available information on the auditory structures, functions, and pathways of the cerebellum remains incomplete.

  14. Electrophysiologic Assessment of Auditory Training Benefits in Older Adults.

    Science.gov (United States)

    Anderson, Samira; Jenkins, Kimberly

    2015-11-01

    Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy. PMID:27587912

  15. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel Nelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand the way in which some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system and music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  16. Temporal cortex reflects effects of sentence context on phonetic processing.

    Science.gov (United States)

    Guediche, Sara; Salvata, Caden; Blumstein, Sheila E

    2013-05-01

    Listeners' perception of acoustically presented speech is constrained by many different sources of information that arise from other sensory modalities and from more abstract higher-level language context. An open question is how perceptual processes are influenced by and interact with these other sources of information. In this study, we use fMRI to examine the effect of a prior sentence fragment meaning on the categorization of two possible target words that differ in an acoustic phonetic feature of the initial consonant, VOT. Specifically, we manipulate the bias of the sentence context (biased, neutral) and the target type (ambiguous, unambiguous). Our results show that an interaction between these two factors emerged in a cluster in temporal cortex encompassing the left middle temporal gyrus and the superior temporal gyrus. The locus and pattern of these interactions support an interactive view of speech processing and suggest that both the quality of the input and the potential bias of the context together interact and modulate neural activation patterns. PMID:23281778

  17. Acoustic processing of temporally modulated sounds in infants: evidence from a combined near-infrared spectroscopy and EEG study

    Directory of Open Access Journals (Sweden)

    Silke Telkemeyer

    2011-04-01

    Full Text Available Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations indicating a sensitivity for temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular based imaging and electrophysiological responses have great potential in the research on language
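
    The theta-band desynchronization reported for the 6-month-olds can be quantified, in its simplest form, as a percent change in theta power between a pre-stimulus baseline and a post-stimulus window. The sketch below is a generic illustration only; the 4-8 Hz band limits, the window choice and the Welch settings are assumptions, not the authors' pipeline.

    import numpy as np
    from scipy.signal import welch

    def theta_desynchronization(baseline, post_stimulus, fs, band=(4.0, 8.0)):
        """Percent change in theta-band power from a pre-stimulus baseline to a
        post-stimulus window for one EEG channel; negative values indicate
        desynchronization (a power decrease)."""
        def band_power(segment):
            freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 256))
            sel = (freqs >= band[0]) & (freqs <= band[1])
            return np.trapz(psd[sel], freqs[sel])
        p_pre, p_post = band_power(baseline), band_power(post_stimulus)
        return 100.0 * (p_post - p_pre) / p_pre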

  18. Characterizing auditory processing and perception in individual listeners with sensorineural hearing loss

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    2011-01-01

    –438 (2008)] was used as a framework. The parameters of the cochlear processing stage of the model were adjusted to account for behaviorally estimated individual basilar-membrane input-output functions and the audiogram, from which the amounts of inner hair-cell and outer hair-cell losses were estimated......This study considered consequences of sensorineural hearing loss in ten listeners. The characterization of individual hearing loss was based on psychoacoustic data addressing audiometric pure-tone sensitivity, cochlear compression, frequency selectivity, temporal resolution, and intensity...

  19. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). In addition to a review of the recent literature on the perception of temporal regularity in music, we discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion of the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  20. Processamento auditivo de militares expostos a ruído ocupacional Auditory processing of servicemen exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    Carla Cassandra de Souza Santos

    2008-03-01

    Full Text Available PURPOSE: to evaluate the auditory processing of military personnel exposed to occupational noise. METHODS: 41 servicemen, exposed to noise for more than 10 years, were evaluated, divided into Group A (n = 16, without hearing loss) and Group B (n = 25, with hearing loss). Basic audiologic evaluation and auditory processing tests (Filtered Speech, SSW in Portuguese, and Pitch Pattern Sequence tests) were carried out. RESULTS: there was a high incidence of auditory processing alterations, especially in the Filtered Speech test (43.75% and 68% in groups A and B, respectively) and the Pitch Pattern Sequence test (68.75% and 48% in groups A and B, respectively). The SSW test was not efficient for evaluating the central auditory abilities of individuals exposed to high sound pressure levels. CONCLUSION: occupational noise exposure interferes with the auditory processing of military personnel. Alterations in the central auditory pathways can be observed regardless of the presence of peripheral hearing loss.

  1. Spatial, temporal and spectral pre-processing for colour vision

    OpenAIRE

    van Hateren, J.H.

    1993-01-01

    Fourier transforms of the spectral radiance of natural objects were investigated. The average spectral power spectrum S_c(f_c) is well described by S_c(f_c) = exp(-β f_c), with f_c the spectral frequency (cycles µm^-1) and β = 0.419 ± 0.097 µm. Average spectral contrast, c_c = [Σ_{f_c ≠ 0} S_c(f_c)/S_c(0)]^(1/2), was 0.224 ± 0.127. Optimal filters for colour pre-processing were derived using a recently developed theory of early vision. The theory assumes that the surrounding world is first sampled spatially, tempor...
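
    The spectral-contrast definition quoted above, c_c = [Σ_{f_c ≠ 0} S_c(f_c)/S_c(0)]^(1/2), translates directly into a short numerical routine. The sketch below assumes a spectral radiance curve sampled at equally spaced wavelengths and an FFT-based power spectrum; it illustrates the published formula rather than reproducing the author's original analysis code.

    import numpy as np

    def spectral_contrast(radiance):
        """Average spectral contrast c_c = sqrt(sum over f_c != 0 of S_c(f_c) / S_c(0)),
        where S_c is the power spectrum of a spectral radiance curve sampled at
        equally spaced wavelengths."""
        power = np.abs(np.fft.rfft(radiance)) ** 2   # S_c(f_c); DC term at index 0
        return float(np.sqrt(power[1:].sum() / power[0]))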

  2. Behavioral Measures of Monaural Temporal Fine Structure Processing

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    Deficits in temporal fine structure (TFS) processing found in hearing-impaired listeners have been shown to correlate poorly to audibility and frequency selectivity, despite adverse effects on speech perception in noise. This underlines the need for an independent measure of TFS processing when...... characterizing hearing impairment. Estimating the acuity of monaural TFS processing in humans however remains a challenge. One suggested measure is based on the ability of listeners to detect a pitch shift between harmonic (H) and inharmonic (I) complex tones with unresolved components (e.g. Moore et al., JASA...... and spectral resolution, for the low pitch evoked by high-frequency complex tones. The aim was to estimate the efficiency of monaural TFS cues as a function of the stimulus center frequency Fc and its ratio N to the stimulus envelope repetition rate. A pitch-matching paradigm was used, such that changes...

  3. Auditory Processing and Language Impairment in Children: Stimulus Considerations for Intervention.

    Science.gov (United States)

    Thal, Donna J.; Barone, Patricia

    1983-01-01

    The performance of language impaired children (four to eight years old) on auditory identification and sequencing tasks which employed different stimuli was studied in two experiments. Results indicated that some children performed significantly better when words rather than tones were used as stimuli.(Author/SEW)

  4. Effects of Methylphenidate on the Auditory Processing Abilities of Children with Attention Deficit-Hyperactivity Disorder.

    Science.gov (United States)

    Keith, Robert W.; Engineer, Parika

    1991-01-01

    Twenty subjects (ages 7-13) with attention deficit hyperactivity disorder were administered a battery of tests (including the Auditory Continuous Performance Test and the Token Test for Children) twice, first when not taking and then when taking methylphenidate. Results indicated significant improvement in performance on all measures when subjects…

  5. Processing of acoustic motion in the auditory cortex of the rufous horseshoe bat, Rhinolophus rouxi

    OpenAIRE

    Firzlaff, Uwe

    2001-01-01

    This study investigated the representation of acoustic motion in different fields of auditory cortex of the rufous horseshoe bat, Rhinolophus rouxi. Motion in horizontal direction (azimuth) was simulated using successive stimuli with dynamically changing interaural intensity differences presented via earphones. The mechanisms underlying a specific sensitivity of neurons to the direction of motion were investigated using microiontophoretic application of γ-aminobutyric acid (GAB...

  6. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  7. Second-order analysis of structured inhomogeneous spatio-temporal point processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad

    Statistical methodology for spatio-temporal point processes is in its infancy. We consider second-order analysis based on pair correlation functions and K-functions for, first, general inhomogeneous spatio-temporal point processes and, second, inhomogeneous spatio-temporal Cox processes. Assuming...... spatio-temporal separability of the intensity function, we clarify different meanings of second-order spatio-temporal separability. One is second-order spatio-temporal independence and relates e.g. to log-Gaussian Cox processes with an additive covariance structure of the underlying spatio......-temporal Gaussian process. Another concerns shot-noise Cox processes with a separable spatio-temporal covariance density. We propose diagnostic procedures for checking hypotheses of second-order spatio-temporal separability, which we apply to simulated and real data (the UK 2001 epidemic foot and mouth disease data)....
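
    As a concrete reference point for the second-order summaries mentioned in this record, a naive space-time K-function estimator for a homogeneous pattern can be written in a few lines. The sketch below omits edge correction and the inhomogeneous weighting by the intensity function that the paper's setting actually requires, so it should be read only as a starting point.

    import numpy as np

    def space_time_K(coords, times, r, t, area, duration):
        """Naive estimator of K(r, t) for a homogeneous spatio-temporal point pattern:
        the number of ordered pairs within spatial distance r and temporal lag t,
        scaled by the intensity estimate and the number of points. No edge correction."""
        coords = np.asarray(coords, dtype=float)
        times = np.asarray(times, dtype=float)
        n = len(coords)
        intensity = n / (area * duration)
        d_space = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        d_time = np.abs(times[:, None] - times[None, :])
        close = (d_space <= r) & (d_time <= t)
        np.fill_diagonal(close, False)          # exclude self-pairs
        return close.sum() / (intensity * n)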

  8. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Directory of Open Access Journals (Sweden)

    Jos J Eggermont

    Full Text Available Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex.

  9. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Science.gov (United States)

    Eggermont, Jos J; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex. PMID:21625385
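
    The space constants reported in this record (5.5, 7.4 and 3.2 mm) describe how quickly pair correlations fall off with electrode separation. One common way to obtain such a constant, sketched below, is to fit an exponential decay to peak correlation coefficients as a function of distance; the abstract does not state the authors' exact fitting procedure, so this is an assumed approach.

    import numpy as np
    from scipy.optimize import curve_fit

    def correlation_space_constant(separation_mm, peak_corr):
        """Fit r(d) = r0 * exp(-d / lam) to peak pair-correlation coefficients versus
        electrode separation and return the space constant lam in millimetres."""
        model = lambda d, r0, lam: r0 * np.exp(-d / lam)
        (r0, lam), _ = curve_fit(model, separation_mm, peak_corr, p0=(0.2, 5.0))
        return float(lam)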

  10. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Hannah L. Golden

    2015-01-01

    Full Text Available Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.

  11. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  12. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. PMID:26332785

  13. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing.

  14. Electrical stimulation of the auditory nerve: the coding of frequency, the perception of pitch and the development of cochlear implant speech processing strategies for profoundly deaf people.

    Science.gov (United States)

    Clark, G M

    1996-09-01

    1. The development of speech processing strategies for multiple-channel cochlear implants has depended on encoding sound frequencies and intensities as temporal and spatial patterns of electrical stimulation of the auditory nerve fibres so that speech information of most importance for intelligibility could be transmitted. 2. Initial physiological studies showed that rate encoding of electrical stimulation above 200 pulses/s could not reproduce the normal response patterns in auditory neurons for acoustic stimulation in the speech frequency range above 200 Hz and suggested that place coding was appropriate for the higher frequencies. 3. Rate difference limens in the experimental animal were only similar to those for sound up to 200 Hz. 4. Rate difference limens in implant patients were similar to those obtained in the experimental animal. 5. Satisfactory rate discrimination could be made for durations of 50 and 100 ms, but not 25 ms. This made rate suitable for encoding longer duration suprasegmental speech information, but not segmental information, such as consonants. The rate of stimulation could also be perceived as pitch, discriminated at different electrode sites along the cochlea and discriminated for stimuli across electrodes. 6. Place pitch could be scaled according to the site of stimulation in the cochlea so that a frequency scale was preserved and it also had a different quality from rate pitch and was described as tonality. Place pitch could also be discriminated for the shorter durations (25 ms) required for identifying consonants. 7. The inaugural speech processing strategy encoded the second formant frequencies (concentrations of frequency energy in the mid frequency range of most importance for speech intelligibility) as place of stimulation, the voicing frequency as rate of stimulation and the intensity as current level. Our further speech processing strategies have extracted additional frequency information and coded this as place of stimulation
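
    The inaugural strategy described in point 7 maps three acoustic parameters onto three electrical ones: second-formant frequency to place of stimulation (electrode), voicing frequency to pulse rate, and intensity to current level. The toy sketch below only illustrates that mapping; the electrode count, frequency range and current limits are invented for illustration and do not reproduce any actual device specification.

    import numpy as np

    def encode_f0_f2(f2_hz, f0_hz, level_db, n_electrodes=20,
                     f2_range=(800.0, 4000.0), current_range=(100.0, 255.0)):
        """Toy F0/F2-style encoder: F2 -> electrode place (log-frequency mapping),
        voicing frequency F0 -> stimulation rate, intensity -> current level.
        All numeric ranges here are hypothetical."""
        log_lo, log_hi = np.log(f2_range[0]), np.log(f2_range[1])
        frac = np.clip((np.log(f2_hz) - log_lo) / (log_hi - log_lo), 0.0, 1.0)
        electrode = int(round(frac * (n_electrodes - 1)))                    # place code
        pulse_rate_hz = float(f0_hz)                                         # rate code
        current = float(np.interp(level_db, [30.0, 90.0], current_range))    # level code
        return electrode, pulse_rate_hz, current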

  15. Enhanced spontaneous functional connectivity of the superior temporal gyrus in early deafness

    OpenAIRE

    Hao Ding; Dong Ming; Baikun Wan; Qiang Li; Wen Qin; Chunshui Yu

    2016-01-01

    Early auditory deprivation may drive the auditory cortex into cross-modal processing of non-auditory sensory information. In a recent study, we had shown that early deaf subjects exhibited increased activation in the superior temporal gyrus (STG) bilaterally during visual spatial working memory; however, the changes in the organization of the STG-related spontaneous functional network, and their cognitive relevance, are still unknown. To clarify this issue, we applied resting state functional ...

  16. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization (where) and sound object recognition (what) have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better titrated what and where task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the where and what tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the where and what tasks with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  17. Spectrotemporal processing differences between auditory cortical fast-spiking and regular-spiking neurons

    OpenAIRE

    Atencio, Craig A.; Schreiner, Christoph E

    2008-01-01

    Excitatory pyramidal neurons and inhibitory interneurons constitute the main elements of cortical circuitry and have distinctive morphologic and electrophysiological properties. Here, we differentiate them by analyzing the time course of their action potentials (APs) and characterizing their receptive field properties in auditory cortex. Pyramidal neurons have longer APs and discharge as Regular-Spiking Units (RSUs), while basket and chandelier cells, which are inhibitory interneurons, have s...

  18. Gender related differences in visual and auditory processing of verbal and figural tasks

    OpenAIRE

    Jaušovec, Norbert; Jaušovec, Ksenija

    2012-01-01

    The aim of the present study was to investigate gender related differences in brain activity for tasks of verbal and figural content presented in the visual and auditory modality. Thirty male and 30 female respondents solved four tasks while their electroencephalogram (EEG) was recorded. Also recorded was the percentage of oxygen saturation of hemoglobin (%StO2) in the respondents' frontal brain areas with near-infrared spectroscopy (NIRS). The main findings of the study can be summarized as ...

  19. Great Expectations: Temporal Expectation Modulates Perceptual Processing Speed

    Science.gov (United States)

    Vangkilde, Signe; Coull, Jennifer T.; Bundesen, Claus

    2012-01-01

    In a crowded dynamic world, temporal expectations guide our attention in time. Prior investigations have consistently demonstrated that temporal expectations speed motor behavior. We explore effects of temporal expectation on "perceptual" speed in three nonspeeded, cued recognition paradigms. Different hazard rate functions for the cue-stimulus…

  20. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability.

    Science.gov (United States)

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-07-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  1. The Temporal Dynamics of Scene Processing: A Multifaceted EEG Investigation

    Science.gov (United States)

    Kravitz, Dwight J.

    2016-01-01

    Abstract Our remarkable ability to process complex visual scenes is supported by a network of scene-selective cortical regions. Despite growing knowledge about the scene representation in these regions, much less is known about the temporal dynamics with which these representations emerge. We conducted two experiments aimed at identifying and characterizing the earliest markers of scene-specific processing. In the first experiment, human participants viewed images of scenes, faces, and everyday objects while event-related potentials (ERPs) were recorded. We found that the first ERP component to evince a significantly stronger response to scenes than the other categories was the P2, peaking ∼220 ms after stimulus onset. To establish that the P2 component reflects scene-specific processing, in the second experiment, we recorded ERPs while the participants viewed diverse real-world scenes spanning the following three global scene properties: spatial expanse (open/closed), relative distance (near/far), and naturalness (man-made/natural). We found that P2 amplitude was sensitive to these scene properties at both the categorical level, distinguishing between open and closed natural scenes, as well as at the single-image level, reflecting both computationally derived scene statistics and behavioral ratings of naturalness and spatial expanse. Together, these results establish the P2 as an ERP marker for scene processing, and demonstrate that scene-specific global information is available in the neural response as early as 220 ms. PMID:27699208
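
    Quantifying the P2 effect described in this record typically comes down to averaging the event-related potential within a window around the component peak (about 220 ms here). The sketch below is a generic illustration; the 200-250 ms window and the single-channel input are assumptions rather than the authors' exact analysis.

    import numpy as np

    def p2_mean_amplitude(epochs, times_ms, window=(200.0, 250.0)):
        """Mean ERP amplitude in a P2 time window for one channel.
        `epochs` has shape (n_trials, n_samples); `times_ms` gives each sample's
        latency relative to stimulus onset."""
        erp = np.mean(epochs, axis=0)                       # trial-averaged waveform
        mask = (times_ms >= window[0]) & (times_ms <= window[1])
        return float(erp[mask].mean())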

  2. Spectro-temporal processing of speech – An information-theoretic framework

    DEFF Research Database (Denmark)

    Christiansen, Thomas Ulrich; Dau, Torsten; Greenberg, Steven

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-date...

  3. The temporal characteristics of Ca2+ entry through L-type and T-type Ca2+ channels shape exocytosis efficiency in chick auditory hair cells during development.

    Science.gov (United States)

    Levic, Snezana; Dulon, Didier

    2012-12-01

    During development, synaptic exocytosis by cochlear hair cells is first initiated by patterned spontaneous Ca(2+) spikes and, at the onset of hearing, by sound-driven graded depolarizing potentials. The molecular reorganization occurring in the hair cell synaptic machinery during this developmental transition still remains elusive. We characterized the changes in biophysical properties of voltage-gated Ca(2+) currents and exocytosis in developing auditory hair cells of a precocial animal, the domestic chick. We found that immature chick hair cells (embryonic days 10-12) use two types of Ca(2+) currents to control exocytosis: low-voltage-activating, rapidly inactivating (mibefradil sensitive) T-type Ca(2+) currents and high-voltage-activating, noninactivating (nifedipine sensitive) L-type currents. Exocytosis evoked by T-type Ca(2+) current displayed a fast release component (RRP) but lacked the slow sustained release component (SRP), suggesting an inefficient recruitment of distant synaptic vesicles by this transient Ca(2+) current. With maturation, the participation of L-type Ca(2+) currents to exocytosis largely increased, inducing a highly Ca(2+) efficient recruitment of an RRP and an SRP component. Notably, L-type-driven exocytosis in immature hair cells displayed higher Ca(2+) efficiency when triggered by prerecorded native action potentials than by voltage steps, whereas similar efficiency for both protocols was found in mature hair cells. This difference likely reflects a tighter coupling between release sites and Ca(2+) channels in mature hair cells. Overall, our results suggest that the temporal characteristics of Ca(2+) entry through T-type and L-type Ca(2+) channels greatly influence synaptic release by hair cells during cochlear development.

  4. Effects of Physical Rehabilitation Integrated with Rhythmic Auditory Stimulation on Spatio-Temporal and Kinematic Parameters of Gait in Parkinson’s Disease

    Science.gov (United States)

    Pau, Massimiliano; Corona, Federica; Pili, Roberta; Casula, Carlo; Sors, Fabrizio; Agostini, Tiziano; Cossu, Giovanni; Guicciardi, Marco; Murgia, Mauro

    2016-01-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson’s disease (PD). In this context, the use of rhythmic auditory stimulation (RAS) has been proven useful in improving several spatio-temporal parameters, but concerning its effect on gait patterns, scarce information is available from a kinematic viewpoint. In this study, we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of supervised rehabilitation, which included gait training integrated with RAS on 26 individuals affected by PD (age 70.4 ± 11.1, Hoehn and Yahr 1–3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a 3-month follow-up, using concise measures (Gait Profile Score and Gait Variable Score, GPS and GVS, respectively), which are able to describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence and correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, which was created mainly through significant decreases in the GVS score associated with the hip flexion–extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new or improving current rehabilitative treatments.

  5. Effects of Physical Rehabilitation Integrated with Rhythmic Auditory Stimulation on Spatio-Temporal and Kinematic Parameters of Gait in Parkinson’s Disease

    Science.gov (United States)

    Pau, Massimiliano; Corona, Federica; Pili, Roberta; Casula, Carlo; Sors, Fabrizio; Agostini, Tiziano; Cossu, Giovanni; Guicciardi, Marco; Murgia, Mauro

    2016-01-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson’s disease (PD). In this context, rhythmic auditory stimulation (RAS) has proven useful in improving several spatio-temporal parameters, but little kinematic information is available about its effect on overall gait patterns. In this study, we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of supervised rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4 ± 11.1, Hoehn and Yahr 1–3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a 3-month follow-up, using concise measures (Gait Profile Score and Gait Variable Score, GPS and GVS, respectively), which are able to describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence, and in correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, driven mainly by significant decreases in the GVS score associated with the hip flexion–extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new rehabilitative treatments or improving current ones. PMID:27563296
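
    A note on the summary measures used above: the Gait Variable Score (GVS) is commonly computed as the RMS deviation of one kinematic curve (e.g., hip flexion–extension) from reference data over the gait cycle, and the Gait Profile Score (GPS) as the RMS of the GVS values. The following is a minimal sketch of that standard formulation; the array names and example curves are illustrative assumptions, not the authors' data or code.

      import numpy as np

      def gait_variable_score(subject_curve, reference_curve):
          # RMS deviation of one kinematic variable (e.g., hip flexion-extension)
          # from reference data, both sampled over the normalized gait cycle.
          diff = np.asarray(subject_curve, float) - np.asarray(reference_curve, float)
          return np.sqrt(np.mean(diff ** 2))

      def gait_profile_score(gvs_values):
          # Overall GPS: RMS of the individual Gait Variable Scores.
          return np.sqrt(np.mean(np.asarray(gvs_values, float) ** 2))

      # Illustrative use with made-up curves (101 points over the gait cycle).
      rng = np.random.default_rng(0)
      cycle = np.linspace(0, 100, 101)
      reference = 30 * np.sin(np.pi * cycle / 100)          # hypothetical reference hip angle
      subject = reference + rng.normal(0, 5, 101)           # hypothetical patient curve
      gvs_hip = gait_variable_score(subject, reference)
      gps = gait_profile_score([gvs_hip])                   # normally nine or more GVS values
      print(f"GVS (hip) = {gvs_hip:.1f} deg, GPS = {gps:.1f} deg")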

  6. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi Tateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
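
    The inner-hair-cell and auditory-nerve stages mentioned in this record are often approximated, in simplified models, by half-wave rectification, low-pass filtering (reflecting the loss of phase locking at high frequencies), and a saturating rate nonlinearity. The sketch below illustrates that generic idea only; it is not the authors' implementation, and the cutoff, gain, and rate values are assumptions.

      import numpy as np
      from scipy.signal import butter, lfilter

      def ihc_firing_rate(sound, fs, cutoff_hz=1000.0, spont_rate=50.0, max_rate=250.0):
          # Crude inner-hair-cell / auditory-nerve stage: half-wave rectify,
          # low-pass filter, then squash into a firing rate between the
          # spontaneous and saturated rates.
          rectified = np.maximum(sound, 0.0)
          b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
          drive = lfilter(b, a, rectified)
          return spont_rate + (max_rate - spont_rate) * np.tanh(5.0 * drive)

      fs = 44100
      t = np.arange(0, 0.05, 1 / fs)
      tone_burst = 0.5 * np.sin(2 * np.pi * 2000 * t)       # 2 kHz tone burst
      print(ihc_firing_rate(tone_burst, fs).max())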

  7. Transfer Effect of Speech-sound Learning on Auditory-motor Processing of Perceived Vocal Pitch Errors.

    Science.gov (United States)

    Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun

    2015-01-01

    Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. Compared with the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production. PMID:26278337

  8. Semantic Processing Impairment in Patients with Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    Amanda G. Jaimes-Bautista

    2015-01-01

    Full Text Available Impairment of the episodic memory system is the best-known cognitive deficit in patients with temporal lobe epilepsy (TLE). Recent studies have shown evidence of semantic disorders, but these have been less studied than episodic memory. The semantic dysfunction in TLE has various cognitive manifestations, such as language disorders characterized by defects in naming, verbal fluency, or remote semantic information retrieval, which affect the ability of patients to interact with their surroundings. This paper reviews recent research on the consequences of TLE for semantic processing, considering neuropsychological, electrophysiological, and neuroimaging findings, as well as the functional role of the hippocampus in semantic processing. The evidence from these studies shows disturbance of semantic memory in patients with TLE and supports the declarative memory theory of the hippocampus. Functional neuroimaging studies show an inefficient compensatory functional reorganization of semantic networks, and electrophysiological studies show a lack of the N400 effect, which could indicate that the deficit in semantic processing in patients with TLE is due to a failure in the mechanisms of automatic access to the lexicon.

  9. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  10. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  11. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  12. Sex-related differences in auditory processing in adolescents with fetal alcohol spectrum disorder: A magnetoencephalographic study

    Directory of Open Access Journals (Sweden)

    Claudia D. Tesche

    2015-01-01

    Full Text Available Children exposed to substantial amounts of alcohol in utero display a broad range of morphological and behavioral outcomes, which are collectively referred to as fetal alcohol spectrum disorders (FASDs). Common to all children on the spectrum are cognitive and behavioral problems that reflect central nervous system dysfunction. Little is known, however, about the potential effects of variables such as sex on alcohol-induced brain damage. The goal of the current research was to utilize magnetoencephalography (MEG) to examine the effect of sex on brain dynamics in adolescents and young adults with FASD during the performance of an auditory oddball task. The stimuli were short trains of 1 kHz “standard” tone bursts (80%) randomly interleaved with 1.5 kHz “target” tone bursts (10%) and “novel” digital sounds (10%). Participants made motor responses to the target tones. Results are reported for 44 individuals (18 males and 26 females) ages 12 through 22 years. Nine males and 13 females had a diagnosis of FASD and the remainder were typically-developing age- and sex-matched controls. The main finding was widespread sex-specific differential activation of the frontal, medial and temporal cortex in adolescents with FASD compared to typically developing controls. Significant differences in evoked-response and time–frequency measures of brain dynamics were observed for all stimulus types in the auditory cortex, inferior frontal sulcus and hippocampus. These results underscore the importance of considering the influence of sex when analyzing neurophysiological data in children with FASD.
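
    The oddball structure described above (80% standards, 10% targets, 10% novels) can be reproduced with a simple trial-sequence generator. This is a hedged sketch: the trial count, shuffling scheme, and function name are assumptions, not details of the authors' MEG protocol.

      import random

      def make_oddball_sequence(n_trials=300, proportions=(0.8, 0.1, 0.1), seed=0):
          # Pseudo-random trial list for the oddball task: 'standard' = 1 kHz tone burst,
          # 'target' = 1.5 kHz tone burst (motor response required), 'novel' = digital sound.
          labels = ("standard", "target", "novel")
          trials = [label
                    for label, p in zip(labels, proportions)
                    for _ in range(int(round(n_trials * p)))]
          random.Random(seed).shuffle(trials)
          return trials

      sequence = make_oddball_sequence()
      print(sequence[:10], {label: sequence.count(label) for label in ("standard", "target", "novel")})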

  13. Modelling neuronal mechanisms of the processing of tones and phonemes in the higher auditory system

    OpenAIRE

    Larsson, Johan P

    2012-01-01

    Both the basic neuronal mechanisms of hearing and the psychological organization of speech perception have been investigated extensively. Nevertheless, in both areas there is a relative scarcity of modelling work. Here we describe two modelling studies. One of them proposes a new frequency-selectivity enhancement mechanism that accounts for results of neurophysiological experiments investigating manifestations of forward masking and, above all, auditory streaming in the...

  14. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
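
    Auditory steady-state responses of the kind monitored here are typically quantified as the spectral amplitude of the averaged response at the known stimulation rate. The sketch below shows that generic quantification under assumed parameters (a 40 Hz rate, simple epoch averaging); it is not the analysis pipeline used in this study.

      import numpy as np

      def assr_amplitude(epochs, fs, stim_rate_hz=40.0):
          # Average the stimulus-locked epochs, FFT the average, and read out
          # the amplitude at the steady-state stimulation frequency.
          evoked = np.mean(epochs, axis=0)                  # epochs: (n_epochs, n_samples)
          spectrum = np.fft.rfft(evoked)
          freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
          idx = np.argmin(np.abs(freqs - stim_rate_hz))
          return 2.0 * np.abs(spectrum[idx]) / evoked.size

      fs = 1000
      t = np.arange(0, 1.0, 1.0 / fs)
      rng = np.random.default_rng(0)
      epochs = np.array([np.sin(2 * np.pi * 40 * t) + rng.normal(size=t.size) for _ in range(60)])
      print(assr_amplitude(epochs, fs))                     # close to 1.0 for this toy signal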

  15. DISTINCT TEMPORALITIES IN THE BREAST CANCER DISEASE PROCESS

    Directory of Open Access Journals (Sweden)

    Janderléia Valéria Dolina

    2014-12-01

    Full Text Available This comprehensive-approach study aimed to understand the reflections and contrasts between personal time and medical therapy protocol time in the life of a young woman with breast cancer. Addressed as a situational study and grounded in Beth’s life story about getting sick and dying of cancer at age 34, the study’s data collection process employed interviews, observation and medical record analysis. The construction of the analytic-synthetic box based on the chronology of Beth’s clinical progression, treatment phases and temporal perception of occurrences enabled us to point out a linear medical therapy protocol time, identified by the diagnosis and treatment sequencing process. On the other hand, Beth’s experienced time was marked by simultaneous and non-linear events that generated suffering resulting from the disease. Such comprehension highlights the need for healthcare professionals to take into account the time experienced by the patient, thus giving the indispensable cancer therapeutic protocol a personal character.

  16. Sex-Specific Brain Deficits in Auditory Processing in an Animal Model of Cocaine-Related Schizophrenic Disorders

    Directory of Open Access Journals (Sweden)

    Patricia A. Broderick

    2013-04-01

    Full Text Available Cocaine is a psychostimulant in the pharmacological class of drugs called Local Anesthetics. Interestingly, cocaine is the only drug in this class that has a chemical formula comprised of a tropane ring and is, moreover, addictive. The correlation between tropane and addiction is well-studied. Another well-studied correlation is that between psychosis induced by cocaine and that psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral as well as neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called Acoustic Startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in responses to startle. The startle method operates through auditory pathways in brain via a network of sensorimotor gating processes within auditory cortex, cochlear nuclei, inferior and superior colliculi, pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus), although such gender responses to acoustic startle have been reported in humans (Swerdlow et al. 1997 [1]). The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequently cognitive dysfunction. Cocaine addicts and schizophrenic patients as well as cocaine treated animals are reported to exhibit symptoms of defective PPI (Geyer et al., 2001 [2]). Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in either sex

  17. Sex-specific brain deficits in auditory processing in an animal model of cocaine-related schizophrenic disorders.

    Science.gov (United States)

    Broderick, Patricia A; Rosenbaum, Taylor

    2013-01-01

    Cocaine is a psychostimulant in the pharmacological class of drugs called Local Anesthetics. Interestingly, cocaine is the only drug in this class that has a chemical formula comprised of a tropane ring and is, moreover, addictive. The correlation between tropane and addiction is well-studied. Another well-studied correlation is that between psychosis induced by cocaine and that psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral as well as neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called Acoustic Startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in responses to startle. The startle method operates through auditory pathways in brain via a network of sensorimotor gating processes within auditory cortex, cochlear nuclei, inferior and superior colliculi, pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus) although such gender responses to acoustic startle have been reported in humans (Swerdlow et al. 1997 [1]). The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequently cognitive dysfunction. Cocaine addicts and schizophrenic patients as well as cocaine treated animals are reported to exhibit symptoms of defective PPI (Geyer et al., 2001 [2]). Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in either sex
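
    Pre-pulse inhibition in startle paradigms such as this one is conventionally expressed as the percentage reduction of the startle response when a prepulse precedes the startling stimulus. A minimal worked sketch of that formula follows; the amplitude values are made up for illustration.

      def percent_ppi(startle_pulse_alone, startle_with_prepulse):
          # PPI (%) = 100 * (pulse-alone startle - prepulse+pulse startle) / pulse-alone startle.
          # Lower PPI indicates a loss of sensorimotor gating.
          return 100.0 * (startle_pulse_alone - startle_with_prepulse) / startle_pulse_alone

      # Hypothetical mean startle amplitudes (arbitrary units)
      print(percent_ppi(100.0, 35.0))   # intact gating: 65% inhibition
      print(percent_ppi(100.0, 80.0))   # weakened gating (e.g., after cocaine): 20% inhibition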

  18. Detection, information fusion, and temporal processing for intelligence in recognition

    Energy Technology Data Exchange (ETDEWEB)

    Casasent, D. [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1996-12-31

    The use of intelligence in vision recognition draws on many different techniques or tools. This presentation discusses several of these techniques for recognition. The recognition process is generally separated into several steps or stages when implemented in hardware, e.g. detection, segmentation and enhancement, and recognition. Several new distortion-invariant filters, biologically-inspired Gabor wavelet filter techniques, and morphological operations that have been found very useful for detection and clutter rejection are discussed. These are all shift-invariant operations that allow multiple object regions of interest in a scene to be located in parallel. We also discuss new algorithm fusion concepts by which the results from different detection algorithms are combined to reduce detection false alarms; these fusion methods utilize hierarchical processing and fuzzy logic concepts. We have found this to be most necessary, since no single detection algorithm is best for all cases. For the final recognition stage, we describe a new method of representing all distorted versions of different classes of objects and determining the object class and pose that most closely matches that of a given input. Besides being efficient in terms of storage and on-line computations required, it overcomes many of the problems that other classifiers have in terms of the required training set size, poor generalization with many hidden layer neurons, etc. It is also attractive in its ability to reject input regions as clutter (non-objects) and to learn new object descriptions. We also discuss its use in processing a temporal sequence of input images of the contents of each local region of interest. We note how this leads to robust results in which estimation errors in individual frames can be overcome. This seems very practical, since in many scenarios a decision need not be made after only one frame of data, as subsequent frames of data enter immediately in sequence.
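
    The biologically inspired Gabor wavelet filters mentioned for the detection stage can be illustrated with a small kernel generator and a convolution followed by crude thresholding. This is a generic sketch of the technique, not the presenter's implementation; all parameter values and the thresholding rule are assumptions.

      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0, gamma=0.5):
          # Real-valued 2-D Gabor kernel: a sinusoidal carrier under a Gaussian
          # envelope, oriented at angle theta (radians).
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          x_t = x * np.cos(theta) + y * np.sin(theta)
          y_t = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
          return envelope * np.cos(2 * np.pi * x_t / wavelength)

      image = np.random.default_rng(0).random((64, 64))     # stand-in for an input scene
      response = convolve2d(image, gabor_kernel(theta=np.pi / 4), mode="same")
      detections = response > response.mean() + 3 * response.std()   # crude thresholding
      print(detections.sum(), "candidate detection pixels")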

  19. Auditory functional magnetic resonance imaging in dogs – normalization and group analysis and the processing of pitch in the canine auditory pathways

    OpenAIRE

    Bach, Jan-Peter; Lüpke, Matthias; Dziallas, Peter; Wefstaedt, Patrick; Uppenkamp, Stefan; Seifert, Hermann; Nolte, Ingo

    2016-01-01

    Background Functional magnetic resonance imaging (fMRI) is an advanced and frequently used technique for studying brain functions in humans and increasingly so in animals. A key element of analyzing fMRI data is group analysis, for which valid spatial normalization is a prerequisite. In the current study we applied normalization and group analysis to a dataset from an auditory functional MRI experiment in anesthetized beagles. The stimulation paradigm used in the experiment was composed of si...

  20. Processing of species-specific auditory patterns in the cricket brain by ascending, local, and descending neurons during standing and walking.

    Science.gov (United States)

    Zorović, M; Hedwig, B

    2011-05-01

    The recognition of the male calling song is essential for phonotaxis in female crickets. We investigated the responses toward different models of song patterns by ascending, local, and descending neurons in the brain of standing and walking crickets. We describe results for two ascending, three local, and two descending interneurons. Characteristic dendritic and axonal arborizations of the local and descending neurons indicate a flow of auditory information from the ascending interneurons toward the lateral accessory lobes and point toward the relevance of this brain region for cricket phonotaxis. Two aspects of auditory processing were studied: the tuning of interneuron activity to pulse repetition rate and the precision of pattern copying. Whereas ascending neurons exhibited weak, low-pass properties, local neurons showed both low- and band-pass properties, and descending neurons represented clear band-pass filters. Accurate copying of single pulses was found at all three levels of the auditory pathway. Animals were walking on a trackball, which allowed an assessment of the effect that walking has on auditory processing. During walking, all neurons were additionally activated, and in most neurons, the spike rate was correlated to walking velocity. The number of spikes elicited by a chirp increased with walking only in ascending neurons, whereas the peak instantaneous spike rate of the auditory responses increased on all levels of the processing pathway. Extra spiking activity resulted in a somewhat degraded copying of the pulse pattern in most neurons.

  1. Treinamento auditivo para transtorno do processamento auditivo: uma proposta de intervenção terapêutica Auditory training for auditory processing disorder: a proposal for therapeutic intervention

    Directory of Open Access Journals (Sweden)

    Alessandra Giannella Samelli

    2010-04-01

    Full Text Available OBJETIVO: verificar a eficácia de um programa informal de treinamento auditivo específico para transtornos do Processamento Auditivo, em um grupo de pacientes com esta alteração, por meio da comparação de pré e pós-testes. MÉTODOS: participaram deste estudo 10 indivíduos de ambos os sexos, da faixa etária entre sete e 20 anos. Todos realizaram avaliação audiológica completa e do processamento auditivo (testes: Fala com Ruído, Staggered Spondaic Word - SSW, Dicótico de Dígitos, Padrão de Frequência). Após 10 sessões individuais de treinamento auditivo, nas quais foram trabalhadas diretamente as habilidades auditivas alteradas, a avaliação do processamento auditivo foi refeita. RESULTADOS: as porcentagens médias de acertos nas situações pré e pós-treinamento auditivo demonstraram diferenças estatisticamente significantes em todos os testes realizados. CONCLUSÃO: o programa de treinamento auditivo informal empregado mostrou-se eficaz em um grupo de pacientes com transtorno do processamento auditivo, uma vez que determinou diferença estatisticamente significante entre o desempenho pré e pós-testes na avaliação do processamento auditivo, indicando melhora das habilidades auditivas alteradas. PURPOSE: to check the efficacy of auditory training in patients with (central) auditory processing disorder, by comparing pre- and post-test results. METHODS: ten male and female subjects, aged 7 to 20 years, took part in this study. All participants were submitted to audiological and (central) auditory processing evaluations, which included Speech Recognition in Noise, Staggered Spondaic Word, Dichotic Digits and Frequency Pattern Discrimination tests. The evaluation was repeated after 10 auditory training sessions. RESULTS: statistically significant differences were found between the mean percentages of correct responses obtained before and after auditory training in all tests. CONCLUSION: the informal auditory training program used proved effective in a group of patients with auditory processing disorder, since it produced a statistically significant difference between pre- and post-test performance in the auditory processing evaluation, indicating improvement of the altered auditory abilities.

  2. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  3. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  4. Hemispheric Asymmetries for Temporal Information Processing: Transient Detection versus Sustained Monitoring

    Science.gov (United States)

    Okubo, Matia; Nicholls, Michael E. R.

    2008-01-01

    This study investigated functional differences in the processing of visual temporal information between the left and right hemispheres (LH and RH). Participants indicated whether or not a checkerboard pattern contained a temporal gap lasting between 10 and 40 ms. When the stimulus contained a temporal signal (i.e. a gap), responses were more…

  5. Neural Processing of Auditory Signals and Modular Neural Control for Sound Tropism of Walking Machines

    Directory of Open Access Journals (Sweden)

    Hubert Roth

    2008-11-01

    Full Text Available The specialized hairs and slit sensillae of spiders (Cupiennius salei) can sense airflow and auditory signals in a low-frequency range. They provide the sensor information for reactive behaviors such as capturing prey. By analogy, this paper describes a setup in which two microphones and a neural preprocessing system, together with a modular neural controller, are used to generate sound tropism in a four-legged walking machine. The neural preprocessing network acts as a low-pass filter and is followed by a network that discriminates between signals coming from the left and the right. The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired walking patterns such that the machine walks straight, then turns towards a switched-on sound source, and then stops near it.
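
    The left/right discrimination described above was produced by small evolved neural networks; as a much simpler stand-in for the same idea, one can low-pass filter both microphone channels and compare their energies. The sketch below makes that substitution explicit and assumes its own cutoff, sampling rate, and decision threshold.

      import numpy as np
      from scipy.signal import butter, lfilter

      def sound_direction(left, right, fs, cutoff_hz=400.0):
          # Low-pass both microphone channels (the sensors respond to low-frequency
          # sound), then decide left vs right from the ratio of channel energies.
          b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
          e_left = np.sum(lfilter(b, a, left) ** 2)
          e_right = np.sum(lfilter(b, a, right) ** 2)
          if e_left > 1.2 * e_right:
              return "left"
          if e_right > 1.2 * e_left:
              return "right"
          return "ahead"

      fs = 8000
      t = np.arange(0, 0.2, 1 / fs)
      source = np.sin(2 * np.pi * 200 * t)
      print(sound_direction(0.4 * source, source, fs))      # louder on the right -> "right"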

  6. Auditory sensory processing deficits in sensory gating and mismatch negativity-like responses in the social isolation rat model of schizophrenia

    DEFF Research Database (Denmark)

    Witten, Louise; Oranje, Bob; Mørk, Arne;

    2014-01-01

    Patients with schizophrenia exhibit disturbances in information processing. These disturbances can be investigated with different paradigms of auditory event related potentials (ERP), such as sensory gating in a double click paradigm (P50 suppression) and the mismatch negativity (MMN) component in an auditory oddball paradigm. The aim of the current study was to test if rats subjected to social isolation, which is believed to induce some changes that mimic features of schizophrenia, display alterations in sensory gating and MMN-like responses. Male Lister-Hooded rats were separated into two groups; one... The study supports the face validity of the SI reared rat model for schizophrenia.

  7. Spatio-temporal càdlàg functional marked point processes: Unifying spatio-temporal frameworks

    NARCIS (Netherlands)

    Cronie, O.J.A.; Mateu, J.

    2014-01-01

    This paper defines the class of càdlàg functional marked point processes (CFMPPs). These are (spatio-temporal) point processes marked by random elements which take values in a càdlàg function space, i.e. the marks are given by càdlàg stochastic processes. We generalise notions of marked (spatio-temp

  8. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision, and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Visual cues (mean percentage error = 11.5%; SD ± 7.0%); auditory cues (mean percentage error = 12.9%; SD ± 11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues present for subjects, the mean accuracy of subjects approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
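
    The auditory cue used above (a 950 Hz tone whose length encodes the target step duration, under a bell-shaped gain envelope) is straightforward to synthesize. A hedged sketch follows; the Gaussian envelope width and sample rate are assumptions, not the authors' exact parameters.

      import numpy as np

      def auditory_step_cue(duration_s, fs=44100, freq_hz=950.0):
          # 950 Hz tone whose length equals the target step duration, shaped by
          # a bell-like (Gaussian) gain envelope to avoid onset/offset clicks.
          t = np.arange(0.0, duration_s, 1.0 / fs)
          envelope = np.exp(-0.5 * ((t - duration_s / 2) / (duration_s / 6)) ** 2)
          return envelope * np.sin(2 * np.pi * freq_hz * t)

      cue = auditory_step_cue(0.6)        # e.g., a 600 ms target step duration
      print(cue.shape, round(float(cue.max()), 3))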

  9. A Cognitive Neuroscience View of Voice-Processing Abnormalities in Schizophrenia: A Window into Auditory Verbal Hallucinations?

    Science.gov (United States)

    Conde, Tatiana; Gonçalves, Oscar F; Pinheiro, Ana P

    2016-01-01

    Auditory verbal hallucinations (AVH) are a core symptom of schizophrenia. Like "real" voices, AVH carry a rich set of linguistic and paralinguistic cues that convey not only speech but also affect and identity information. Disturbed processing of voice identity, affective, and speech information has been reported in patients with schizophrenia. More recent evidence has suggested a link between voice-processing abnormalities and specific clinical symptoms of schizophrenia, especially AVH. It is still not well understood, however, to what extent these dimensions are impaired and how abnormalities in these processes might contribute to AVH. In this review, we consider behavioral, neuroimaging, and electrophysiological data to investigate the speech, identity, and affective dimensions of voice processing in schizophrenia, and we discuss how abnormalities in these processes might help to elucidate the mechanisms underlying specific phenomenological features of AVH. Schizophrenia patients exhibit behavioral and neural disturbances in the three dimensions of voice processing. Evidence suggesting a role of dysfunctional voice processing in AVH seems to be stronger for the identity and speech dimensions than for the affective domain. PMID:26954598

  10. Spatio-temporal statistical models with applications to atmospheric processes

    International Nuclear Information System (INIS)

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not been considered in the context of these methods and several open questions are posed. The first paper attempts to determine a means of characterizing the semiannual oscillation (SAO) spatial variation in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  11. Spatio-temporal statistical models with applications to atmospheric processes

    Energy Technology Data Exchange (ETDEWEB)

    Wikle, C.K.

    1996-12-31

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not been considered in the context of these methods and several open questions are posed. The first paper attempts to determine a means of characterizing the semiannual oscillation (SAO) spatial variation in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  12. Adaptation in the auditory system: an overview

    OpenAIRE

    David Pérez-González; Malmierca, Manuel S.

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the s...

  13. Auditory Neuropathy

    Science.gov (United States)

    ... field differ in their opinions about the potential benefits of hearing aids, cochlear implants, and other technologies for people with auditory neuropathy. Some professionals report that hearing aids and personal listening devices such as frequency modulation (FM) systems are ...

  14. Proportional spike-timing precision and firing reliability underlie efficient temporal processing of periodicity and envelope shape cues.

    Science.gov (United States)

    Zheng, Y; Escabí, M A

    2013-08-01

    Temporal sound cues are essential for sound recognition, pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extends from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability was strongly affected by the sound modulation frequency. Both the information efficiency and total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain where proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues.
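
    Spike-timing precision and firing reliability of the kind analyzed here can be estimated across repeated stimulus trials: precision as the spread (jitter) of spike times around a response event, and reliability as the fraction of trials producing a spike in the event window. The following is a generic sketch of such an estimate, not the authors' analysis pipeline; the window size and example spike times are assumed.

      import numpy as np

      def timing_precision_and_reliability(trial_spike_times, event_time, window=0.01):
          # For one response event, return (jitter SD in seconds, reliability).
          # trial_spike_times: list of 1-D arrays of spike times (s), one per trial.
          first_spikes = []
          for spikes in trial_spike_times:
              spikes = np.asarray(spikes)
              in_window = spikes[(spikes >= event_time) & (spikes < event_time + window)]
              if in_window.size:
                  first_spikes.append(in_window[0])
          reliability = len(first_spikes) / len(trial_spike_times)
          jitter = float(np.std(first_spikes)) if len(first_spikes) > 1 else float("nan")
          return jitter, reliability

      trials = [np.array([0.1002, 0.25]), np.array([0.1011]), np.array([0.30])]
      print(timing_precision_and_reliability(trials, event_time=0.10))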

  15. The Effects of Aircraft Noise on the Auditory Language Processing Abilities of English First Language Primary School Learners in Durban, South Africa

    Science.gov (United States)

    Hollander, Cara; de Andrade, Victor Manuel

    2014-01-01

    Schools located near to airports are exposed to high levels of noise which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…

  16. Understanding and Identifying the Child at Risk for Auditory Processing Disorders: A Case Method Approach in Examining the Interdisciplinary Role of the School Nurse

    Science.gov (United States)

    Neville, Kathleen; Foley, Marie; Gertner, Alan

    2011-01-01

    Despite receiving increased professional and public awareness since the initial American Speech Language Hearing Association (ASHA) statement defining Auditory Processing Disorders (APDs) in 1993 and the subsequent ASHA statement (2005), many misconceptions remain regarding APDs in school-age children among health and academic professionals. While…

  17. Test Review: R. W. Keith "SCAN-3 for Adolescents and Adults--Tests for Auditory Processing Disorders". San Antonio, TX: Pearson, 2009

    Science.gov (United States)

    Lovett, Benjamin J.; Johnson, Theodore L.

    2010-01-01

    The SCAN-3 is a battery of tasks used for the screening and diagnosis of auditory processing disorder. It is available in two versions, one for children (the SCAN-3: C) and one for adolescents and adults (the SCAN-3: A); the latter version of the SCAN-3 is reviewed in this article, although it is very similar to the child version. The primary…

  18. Processing of natural temporal stimuli by macaque retinal ganglion cells

    NARCIS (Netherlands)

    Hateren, J.H. van; Rüttiger, L.; Lee, B.B.

    2002-01-01

    This study quantifies the performance of primate retinal ganglion cells in response to natural stimuli. Stimuli were confined to the temporal and chromatic domains and were derived from two contrasting environments, one typically northern European and the other a flower show. The performance of the

  19. Survey of Bayesian Models for Modelling of Stochastic Temporal Processes

    Energy Technology Data Exchange (ETDEWEB)

    Ng, B

    2006-10-12

    This survey gives an overview of popular generative models used in the modeling of stochastic temporal systems. In particular, this survey is organized into two parts. The first part discusses the discrete-time representations of dynamic Bayesian networks and dynamic relational probabilistic models, while the second part discusses the continuous-time representation of continuous-time Bayesian networks.
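
    The simplest discrete-time instance of the dynamic Bayesian networks surveyed here is a hidden Markov model, for which filtering propagates a belief over hidden states through time. A minimal sketch follows, with made-up transition and observation tables; it illustrates the generic forward-filtering recursion rather than any specific model discussed in the survey.

      import numpy as np

      # Hypothetical two-state model: row i of `transition` is P(next state | current state i);
      # row i of `obs_likelihood` is P(observation | state i) for observations 0 and 1.
      transition = np.array([[0.9, 0.1],
                             [0.2, 0.8]])
      obs_likelihood = np.array([[0.7, 0.3],
                                 [0.1, 0.9]])

      def forward_filter(observations, prior=(0.5, 0.5)):
          # Recursively compute P(state_t | obs_1..t): predict with the transition
          # model, then reweight by the likelihood of each new observation.
          belief = np.asarray(prior, float)
          for obs in observations:
              belief = transition.T @ belief            # prediction step
              belief = belief * obs_likelihood[:, obs]  # measurement update
              belief = belief / belief.sum()            # normalize
          return belief

      print(forward_filter([0, 0, 1, 1, 1]))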

  20. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia.

    Science.gov (United States)

    Dale, Corby L; Brown, Ethan G; Fisher, Melissa; Herman, Alexander B; Dowling, Anne F; Hinkley, Leighton B; Subramaniam, Karuna; Nagarajan, Srikantan S; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  1. The Perception of Auditory Motion.

    Science.gov (United States)

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  2. Experimental analysis of the auditory detection process on avian point counts

    Science.gov (United States)

    Simons, T.R.; Alldredge, M.W.; Pollock, K.H.; Wettroth, J.M.

    2007-01-01

    We have developed a system for simulating the conditions of avian surveys in which birds are identified by sound. The system uses a laptop computer to control a set of amplified MP3 players placed at known locations around a survey point. The system can realistically simulate a known population of songbirds under a range of factors that affect detection probabilities. The goals of our research are to describe the sources and range of variability affecting point-count estimates and to find applications of sampling theory and methodologies that produce practical improvements in the quality of bird-census data. Initial experiments in an open field showed that, on average, observers tend to undercount birds on unlimited-radius counts, though the proportion of birds counted by individual observers ranged from 81% to 132% of the actual total. In contrast to the unlimited-radius counts, when data were truncated at a 50-m radius around the point, observers overestimated the total population by 17% to 122%. Results also illustrate how detection distances decline and identification errors increase with increasing levels of ambient noise. Overall, the proportion of birds heard by observers decreased by 28 ± 4.7% under breezy conditions, 41 ± 5.2% with the presence of additional background birds, and 42 ± 3.4% with the addition of 10 dB of white noise. These findings illustrate some of the inherent difficulties in interpreting avian abundance estimates based on auditory detections, and why estimates that do not account for variations in detection probability will not withstand critical scrutiny. © The American Ornithologists' Union, 2007.
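
    The count bias documented above can be illustrated with a toy simulation in which detection probability falls off with distance and is further reduced by ambient noise. The half-normal detection function, its parameters, and the noise penalty below are illustrative assumptions, not values estimated in the experiment.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_point_count(n_birds=30, radius_m=100.0, sigma_m=60.0, noise_penalty=1.0):
          # Place birds uniformly within a circular plot and detect each with a
          # half-normal probability that decays with distance; extra ambient noise
          # scales detectability down (noise_penalty < 1).
          distances = radius_m * np.sqrt(rng.uniform(size=n_birds))   # uniform over the disc
          p_detect = noise_penalty * np.exp(-(distances ** 2) / (2 * sigma_m ** 2))
          return int(np.sum(rng.uniform(size=n_birds) < p_detect))

      quiet = np.mean([simulate_point_count() for _ in range(1000)])
      noisy = np.mean([simulate_point_count(noise_penalty=0.6) for _ in range(1000)])
      print(f"mean count (quiet) = {quiet:.1f}, mean count (noisy) = {noisy:.1f} of 30 birds")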

  3. The Role of Temporally Coarse Form Processing during Binocular Rivalry

    OpenAIRE

    van Boxtel, Jeroen J. A.; David Alais; Erkelens, Casper J.; Raymond van Ee

    2008-01-01

    Presenting the eyes with spatially mismatched images causes a phenomenon known as binocular rivalry, a fluctuation of awareness whereby each eye's image alternately determines perception. Binocular rivalry is used to study interocular conflict resolution and the formation of conscious awareness from retinal images. Although the spatial determinants of rivalry have been well-characterized, the temporal determinants are still largely unstudied. We confirm a previous observation that conflicting ...

  4. Temporal Analysis of Motif Mixtures using Dirichlet Processes

    OpenAIRE

    Emonet, Rémi; Varadarajan, J.; Odobez, Jean-Marc

    2014-01-01

    In this paper, we present a new model for unsupervised discovery of recurrent temporal patterns (or motifs) in time series (or documents). The model is designed to handle the difficult case of multivariate time series obtained from a mixture of activities, that is, our observations are caused by the superposition of multiple phenomena occurring concurrently and with no synchronization. The model uses nonparametric Bayesian methods to describe both the motifs and thei...

  5. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  6. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas Giret

    2015-10-01

    Full Text Available Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of

  7. Temporal pulse cleaning by a self-diffraction process for ultrashort laser pulses

    Science.gov (United States)

    Xie, Na; Zhou, Kainan; Sun, Li; Wang, Xiaodong; Guo, Yi; Li, Qing; Su, Jingqin

    2014-11-01

    Applying the self-diffraction process to clean ultrashort laser pulses temporally is a recently developed and effective way to enhance temporal contrast. In this paper, we attempt to clean ultrashort laser pulses temporally by the self-diffraction process. Experiments were carried out to study the temporal contrast improvement in the front-end system of an ultraintense and ultrashort laser facility, i.e. the super intense laser for experiment on the extremes (SILEX-I). The results show that the maximum conversion efficiency of the first-order self-diffraction (SD1) pulse is 11%. The temporal contrast of the SD1 signal is improved by two orders of magnitude, i.e. to 10^3, for a 2.4-ns prepulse with an initial contrast of ~10. For a 5.5-ns prepulse with an initial contrast of 2×10^3, the temporal contrast of the SD1 signal is improved by more than three orders of magnitude.
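
    As a side note, the reported figures are consistent with simple order-of-magnitude arithmetic; the helper below is a hypothetical illustration (not from the paper) of how contrast ratios and orders of magnitude relate.

```python
# Hedged sketch: relating contrast ratios to orders-of-magnitude improvements.
# The numbers come from the abstract above; the helper itself is illustrative only.
import math

def improve(contrast: float, orders: float) -> float:
    """Apply an improvement of `orders` orders of magnitude to a contrast ratio."""
    return contrast * 10 ** orders

# 2.4-ns prepulse: initial contrast ~10, improved by two orders of magnitude -> ~1e3
print(improve(10, 2))                      # 1000.0
# 5.5-ns prepulse: initial contrast 2e3, improved by more than three orders -> > 2e6
print(improve(2e3, 3))                     # 2000000.0
print(math.log10(improve(10, 2) / 10))     # 2.0 orders of magnitude, sanity check
```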

  8. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  9. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  10. Feature Engineering and Post-Processing for Temporal Expression Recognition Using Conditional Random Fields

    NARCIS (Netherlands)

    S. Fissaha Adafre; M. de Rijke

    2005-01-01

    We present the results of feature engineering and post-processing experiments conducted on a temporal expression recognition task. The former explores the use of different kinds of tagging schemes and of exploiting a list of core temporal expressions during training. The latter is concerned with the
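
    As a rough, assumption-laden illustration of the general approach (a BIO-style tagging scheme with simple token features of the kind fed to a linear-chain CRF), the sketch below uses an invented example sentence and feature names; it is not the authors' feature set or code.

```python
# Hedged sketch of BIO tagging + token features for temporal expression recognition.
# The feature set and example sentence are hypothetical, not taken from the paper.
from typing import List, Dict, Tuple

def token_features(tokens: List[str], i: int) -> Dict[str, object]:
    """Simple per-token features of the kind often used with linear-chain CRFs."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_capitalized": tok[:1].isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

def to_bio(tokens: List[str], temporal_spans: List[Tuple[int, int]]) -> List[str]:
    """Convert (start, end) token spans of temporal expressions into BIO labels."""
    labels = ["O"] * len(tokens)
    for start, end in temporal_spans:
        labels[start] = "B-TIMEX"
        for j in range(start + 1, end):
            labels[j] = "I-TIMEX"
    return labels

sentence = "The meeting was moved to next Friday morning".split()
spans = [(5, 8)]  # "next Friday morning"
X = [token_features(sentence, i) for i in range(len(sentence))]
y = to_bio(sentence, spans)
for feats, label in zip(X, y):
    print(label, feats["lower"])
```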

  11. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  12. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
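
    For intuition, a minimal sketch of the sparsification idea (keeping only a handful of spectrogram peaks per second) is given below; it uses global peak selection and arbitrary parameter values, so it is only loosely in the spirit of the authors' peak-picking algorithm, not their implementation.

```python
# Hedged sketch: sparsify a spectrogram by keeping only the strongest peaks,
# loosely in the spirit of the "auditory sketches" idea (not the authors' code).
import numpy as np
from scipy.signal import stft

def sparse_sketch(x: np.ndarray, fs: int, peaks_per_second: int = 10) -> np.ndarray:
    """Return a magnitude spectrogram with all but the top peaks zeroed out."""
    f, t, Z = stft(x, fs=fs, nperseg=512)
    mag = np.abs(Z)
    duration = len(x) / fs
    n_keep = max(1, int(peaks_per_second * duration))
    # Indices of the n_keep largest time-frequency bins.
    flat_idx = np.argpartition(mag.ravel(), -n_keep)[-n_keep:]
    sketch = np.zeros_like(mag)
    sketch.flat[flat_idx] = mag.flat[flat_idx]
    return sketch

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)
    sk = sparse_sketch(x, fs, peaks_per_second=10)
    print("non-zero elements kept:", np.count_nonzero(sk))
```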

  13. Elliptic Bessel processes and elliptic Dyson models realized as temporally inhomogeneous processes

    CERN Document Server

    Katori, Makoto

    2016-01-01

    The Bessel process with parameter $D>1$ and the Dyson model of interacting Brownian motions with coupling constant $\beta > 0$ are extended to the processes, in which the drift term and the interaction terms are given by the logarithmic derivatives of Jacobi's theta functions. They are called the elliptic Bessel process, eBES$^{(D)}$, and the elliptic Dyson model, eDYS$^{(\beta)}$, respectively. Both are realized on the circumference of a circle $[0, 2\pi r)$ with radius $r > 0$ as temporally inhomogeneous processes defined in a finite time interval $[0, t_*)$, $t_* < \infty$. Transformations of them to Schrödinger-type equations with time-dependent potentials lead us to proving that eBES$^{(D)}$ and eDYS$^{(\beta)}$ can be constructed as the time-dependent Girsanov transformations of Brownian motions. In the special cases where $D=3$ and $\beta=2$, observables of the processes are defined and the processes are represented for them using the Brownian paths winding round a circle and pinned at time $t_*$. We...

  14. Auditory processing and audiovisual integration revealed by combining psychophysical and fMRI experiments

    NARCIS (Netherlands)

    Tomaskovic, Sonja

    2006-01-01

    This thesis describes experiments conducted to investigate human perception and the processing of sound in the human brain. There are several stages in sound processing. Firstly, in the ear, sound is captured and transformed into an electrical signal, then the information is transported to

  15. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. PMID:27292114

  16. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams

    NARCIS (Netherlands)

    Bezgin, G.; Rybacki, K.; Opstal, A.J. van; Bakker, R.; Shen, K.; Vakorin, V.A.; McIntosh, A.R.; Kötter, R.

    2014-01-01

    Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced from studies of the visua

  17. Spatio-temporal resolution of primary processes of photosynthesis.

    Science.gov (United States)

    Junge, Wolfgang

    2015-01-01

    Technical progress in laser-sources and detectors has allowed the temporal and spatial resolution of chemical reactions down to femtoseconds and Å-units. In photon-excitable systems the key to chemical kinetics, trajectories across the vibrational saddle landscape, are experimentally accessible. Simple and thus well-defined chemical compounds are preferred objects for calibrating new methodologies and carving out paradigms of chemical dynamics, as shown in several contributions to this Faraday Discussion. Aerobic life on earth is powered by solar energy, which is captured by microorganisms and plants. Oxygenic photosynthesis relies on a three billion year old molecular machinery which is as well defined as simpler chemical constructs. It has been analysed to a very high precision. The transfer of excitation between pigments in antennae proteins, of electrons between redox-cofactors in reaction centres, and the oxidation of water by a Mn4Ca-cluster are solid state reactions. ATP, the general energy currency of the cell, is synthesized by a most agile, rotary molecular machine. While the efficiency of photosynthesis competes well with photovoltaics at the time scale of nanoseconds, it is lower by an order of magnitude for crops and again lower for bio-fuels. The enormous energy demand of mankind calls for engineered (bio-mimetic or bio-inspired) solar-electric and solar-fuel devices. PMID:25824647

  18. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  19. Music perception and cognition following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Tramo, M J; Bharucha, J J; Musiek, F E

    1990-01-01

    We present experimental and anatomical data from a case study of impaired auditory perception following bilateral hemispheric strokes. To consider the cortical representation of sensory, perceptual, and cognitive functions mediating tonal information processing in music, pure tone sensation thresholds, spectral intonation judgments, and the associative priming of spectral intonation judgments by harmonic context were examined, and lesion localization was analyzed quantitatively using straight-line two-dimensional maps of the cortical surface reconstructed from magnetic resonance images. Despite normal pure tone sensation thresholds at 250-8000 Hz, the perception of tonal spectra was severely impaired, such that harmonic structures (major triads) were almost uniformly judged to sound dissonant; yet, the associative priming of spectral intonation judgments by harmonic context was preserved, indicating that cognitive representations of tonal hierarchies in music remained intact and accessible. Brainprints demonstrated complete bilateral lesions of the transverse gyri of Heschl and partial lesions of the right and left superior temporal gyri involving 98 and 20% of their surface areas, respectively. In the right hemisphere, there was partial sparing of the planum temporale, temporoparietal junction, and inferior parietal cortex. In the left hemisphere, all of the superior temporal region anterior to the transverse gyrus and parts of the planum temporale, temporoparietal junction, inferior parietal cortex, and insula were spared. These observations suggest that (1) sensory, perceptual, and cognitive functions mediating tonal information processing in music are neurologically dissociable; (2) complete bilateral lesions of primary auditory cortex combined with partial bilateral lesions of auditory association cortex chronically impair tonal consonance perception; (3) cognitive functions that hierarchically structure pitch information and generate harmonic expectancies

  20. Dynamics of Electrocorticographic (ECoG) Activity in Human Temporal and Frontal Cortical Areas During Music Listening

    Science.gov (United States)

    Potes, Cristhian; Gunduz, Aysegul; Brunner, Peter; Schalk, Gerwin

    2012-01-01

    Previous studies demonstrated that brain signals encode information about specific features of simple auditory stimuli or of general aspects of natural auditory stimuli. How brain signals represent the time course of specific features in natural auditory stimuli is not well understood. In this study, we show in eight human subjects that signals recorded from the surface of the brain (electrocorticography (ECoG)) encode information about the sound intensity of music. ECoG activity in the high gamma band recorded from the posterior part of the superior temporal gyrus as well as from an isolated area in the precentral gyrus were observed to be highly correlated with the sound intensity of music. These results not only confirm the role of auditory cortices in auditory processing but also point to an important role of premotor and motor cortices. They also encourage the use of ECoG activity to study more complex acoustic features of simple or natural auditory stimuli. PMID:22537600
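
    A minimal sketch of the analysis logic described above (correlating a high-gamma band envelope with a sound-intensity envelope) is given below, using synthetic data and invented filter settings rather than the study's actual recordings or parameters.

```python
# Hedged sketch: correlate a high-gamma band envelope of a neural channel
# with the sound-intensity envelope of a stimulus (synthetic data, invented settings).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope(signal: np.ndarray, fs: float, band: tuple) -> np.ndarray:
    """Band-pass the signal and return its Hilbert amplitude envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered))

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
sound_intensity = 1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)   # slow intensity fluctuation
carrier = np.random.randn(t.size)                         # broadband "neural" noise
ecog = sound_intensity * carrier                           # amplitude-modulated channel

high_gamma = envelope(ecog, fs, (70, 170))
r = np.corrcoef(high_gamma, sound_intensity)[0, 1]
print(f"correlation between high-gamma envelope and sound intensity: {r:.2f}")
```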

  1. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    Science.gov (United States)

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    Science.gov (United States)

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. PMID:26402725

  3. Laser beam temporal and spatial tailoring for laser shock processing

    Science.gov (United States)

    Hackel, Lloyd; Dane, C. Brent

    2001-01-01

    Techniques are provided for formatting laser pulse spatial shape and for effectively and efficiently delivering the laser energy to a work surface in the laser shock process. An appropriately formatted pulse helps to eliminate breakdown and generate uniform shocks. The invention uses a high power laser technology capable of meeting the laser requirements for a high throughput process, that is, a laser which can treat many square centimeters of surface area per second. The shock process has a broad range of applications, especially in the aerospace industry, where treating parts to reduce or eliminate corrosion failure is very important. The invention may be used for treating metal components to improve strength and corrosion resistance. The invention has a broad range of applications for parts that are currently shot peened and/or require peening by means other than shot peening. Major applications for the invention are in the automotive and aerospace industries for components such as turbine blades, compressor components, gears, etc.

  4. Temporal aggregation in a periodically integrated autoregressive process

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); H.P. Boswijk (Peter)

    1996-01-01

    textabstractA periodically integrated autoregressive process for a time series which is observed S times per year assumes the presence of S - 1 cointegration relations between the annual series containing the seasonal observations, with the additional feature that these relations are different acros
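
    For intuition only: in a quarterly periodically integrated AR(1), the season-specific coefficients multiply to one. The toy simulation below (an assumption-based illustration, not from the paper) generates such a series and aggregates it to annual totals.

```python
# Hedged sketch: simulate a quarterly periodically integrated AR(1) process,
# i.e. y_t = phi_s * y_{t-1} + eps_t with the product of the seasonal phi_s equal to 1.
import numpy as np

rng = np.random.default_rng(42)
phi = np.array([0.8, 1.25, 0.5, 2.0])     # product = 0.8 * 1.25 * 0.5 * 2.0 = 1.0
assert np.isclose(np.prod(phi), 1.0)

n_years = 50
y = np.zeros(4 * n_years)
for t in range(1, len(y)):
    season = t % 4
    y[t] = phi[season] * y[t - 1] + rng.normal()

# Annual (temporal) aggregation of the quarterly observations.
annual = y.reshape(n_years, 4).sum(axis=1)
print("variance of quarterly series:", round(y.var(), 2))
print("variance of annual aggregates:", round(annual.var(), 2))
```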

  5. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    Science.gov (United States)

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models. PMID:27550703

  6. Early stages of melody processing: stimulus-sequence and task-dependent neuronal activity in monkey auditory cortical fields A1 and R.

    Science.gov (United States)

    Yin, Pingbo; Mishkin, Mortimer; Sutter, Mitchell; Fritz, Jonathan B

    2008-12-01

    To explore the effects of acoustic and behavioral context on neuronal responses in the core of auditory cortex (fields A1 and R), two monkeys were trained on a go/no-go discrimination task in which they learned to respond selectively to a four-note target (S+) melody and withhold response to a variety of other nontarget (S-) sounds. We analyzed evoked activity from 683 units in A1/R of the trained monkeys during task performance and from 125 units in A1/R of two naive monkeys. We characterized two broad classes of neural activity that were modulated by task performance. Class I consisted of tone-sequence-sensitive enhancement and suppression responses. Enhanced or suppressed responses to specific tonal components of the S+ melody were frequently observed in trained monkeys, but enhanced responses were rarely seen in naive monkeys. Both facilitatory and suppressive responses in the trained monkeys showed a temporal pattern different from that observed in naive monkeys. Class II consisted of nonacoustic activity, characterized by a task-related component that correlated with bar release, the behavioral response leading to reward. We observed a significantly higher percentage of both Class I and Class II neurons in field R than in A1. Class I responses may help encode a long-term representation of the behaviorally salient target melody. Class II activity may reflect a variety of nonacoustic influences, such as attention, reward expectancy, somatosensory inputs, and/or motor set and may help link auditory perception and behavioral response. Both types of neuronal activity are likely to contribute to the performance of the auditory task. PMID:18842950

  7. Two adaptation processes in auditory hair cells together can provide an active amplifier

    CERN Document Server

    Vilfan, A; Vilfan, Andrej; Duke, Thomas

    2003-01-01

    The hair cells of the vertebrate inner ear convert mechanical stimuli to electrical signals. Two adaptation mechanisms are known to modify the ionic current flowing through the transduction channels of the hair bundles: a rapid process involves calcium ions binding to the channels; and a slower adaptation is associated with the movement of myosin motors. We present a mathematical model of the hair cell which demonstrates that the combination of these two mechanisms can produce 'self-tuned critical oscillations', i.e. maintain the hair bundle at the threshold of an oscillatory instability. The characteristic frequency depends on the geometry of the bundle and on the calcium dynamics, but is independent of channel kinetics. Poised on the verge of vibrating, the hair bundle acts as an active amplifier. However, if the hair cell is sufficiently perturbed, other dynamical regimes can occur. These include slow relaxation oscillations which resemble the hair bundle motion observed in some experimental preparations.

  8. Animal models of spontaneous activity in the healthy and impaired auditory system

    Directory of Open Access Journals (Sweden)

    Jos J Eggermont

    2015-04-01

    Full Text Available Spontaneous neural activity in the auditory nerve fibers and in auditory cortex in healthy animals is discussed with respect to the question: is spontaneous activity noise or an information carrier? The studies reviewed suggest strongly that spontaneous activity is a carrier of information. Subsequently, I review the numerous findings in the impaired auditory system, particularly with reference to noise trauma and tinnitus. Here the common assumption is that tinnitus reflects increased noise in the auditory system that, among other effects, affects temporal processing and interferes with the gap-startle reflex, which is frequently used as a behavioral assay for tinnitus. It is, however, more likely that the increased spontaneous activity in tinnitus, in firing rate as well as in neural synchrony, carries information that shapes the activity of downstream structures, including non-auditory ones, leading to the tinnitus percept. The main drivers of that process are bursting and synchronous firing, which facilitate the transfer of activity across synapses and allow the formation of auditory objects, such as tinnitus.

  9. Variability and information content in auditory cortex spike trains during an interval-discrimination task.

    Science.gov (United States)

    Abolafia, Juan M; Martinez-Garcia, M; Deco, G; Sanchez-Vives, M V

    2013-11-01

    Processing of temporal information is key in auditory processing. In this study, we recorded single-unit activity from rat auditory cortex while the animals performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike firing variability measured with the Fano factor was markedly reduced, not only during stimulation, but also in between stimuli in engaged trials. We next explored whether this decrease in variability was associated with increased information encoding. Our information theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates coding properties of auditory cortical neurons during an interval-discrimination task. PMID:23945780
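
    The Fano factor mentioned above is simply the variance of the spike count divided by its mean, computed across trials for a fixed window. A minimal sketch with synthetic spike counts (not the study's data):

```python
# Hedged sketch: Fano factor of spike counts across trials (variance / mean).
import numpy as np

def fano_factor(spike_counts: np.ndarray) -> float:
    """spike_counts: 1-D array of spike counts, one per trial, for a fixed window."""
    mean = spike_counts.mean()
    return spike_counts.var(ddof=1) / mean if mean > 0 else np.nan

rng = np.random.default_rng(0)
idle_counts = rng.poisson(lam=5.0, size=200)            # Poisson-like firing, FF ~ 1
engaged_counts = rng.binomial(n=20, p=0.25, size=200)   # more regular firing, FF < 1
print("idle    Fano factor:", round(fano_factor(idle_counts), 2))
print("engaged Fano factor:", round(fano_factor(engaged_counts), 2))
```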

  10. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they have mostly been described after lesions of the brain stem and are very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period 1996–2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  11. The Musical Emotional Bursts: A validated set of musical affect bursts to investigate auditory affective processing.

    Directory of Open Access Journals (Sweden)

    Sébastien Paquette

    2013-08-01

    Full Text Available The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analogue of the Montreal Affective Voices (MAV) – a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 sec) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (n:40) or a clarinet (n:40). The MEB arguably represent a primitive form of music emotional expression, just like the MAV represent a primitive form of vocal, nonlinguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli (30 stimuli x 4 [3 emotions + neutral] x 2 instruments) by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n:80) was lower than for the MAVs but still very high with an average percent correct recognition score of 80.4%. Highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or used for testing affective perception in patients with communication problems.

  12. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    Science.gov (United States)

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys. PMID:17321703

  13. Dissociating neural mechanisms of temporal sequencing and processing phonemes.

    Science.gov (United States)

    Gelfand, Jenna R; Bookheimer, Susan Y

    2003-06-01

    Using fMRI, we sought to determine whether the posterior, superior portion of Broca's area performs operations on phoneme segments specifically or implements processes general to sequencing discrete units. Twelve healthy volunteers performed two sequence manipulation tasks and one matching task, using strings of syllables and hummed notes. The posterior portion of Broca's area responded specifically to the sequence manipulation tasks, independent of whether the stimuli were composed of phonemes or hummed notes. In contrast, the left supramarginal gyrus was somewhat more specific to sequencing phoneme segments. These results suggest a functional dissociation of the canonical left hemisphere language regions encompassing the "phonological loop," with the left posterior inferior frontal gyrus responding not to the sound structure of language but rather to sequential operations that may underlie the ability to form words out of dissociable elements.

  14. Temporal properties of dynamic processes on complex networks

    Science.gov (United States)

    Turalska, Malgorzata A.

    Many social, biological and technological systems can be viewed as complex networks with a large number of interacting components. However, despite recent advancements in network theory, a satisfactory description of dynamic processes arising in such cooperative systems is a subject of ongoing research. In this dissertation the emergence of dynamical complexity in networks of interacting stochastic oscillators is investigated. In particular I demonstrate that networks of two- and three-state stochastic oscillators present a second-order phase transition with respect to the strength of coupling between individual units. I show that at the critical point fluctuations of the global order parameter are characterized by an inverse power-law distribution and I assess their renewal properties. Additionally, I study the effect that different types of perturbation have on the dynamical properties of the model. I discuss the relevance of those observations for the transmission of information between complex systems.
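
    As a loose illustration of the kind of model described (globally coupled two-state stochastic units and a global order parameter), the sketch below uses an assumed flipping rule and arbitrary parameters; it is not the dissertation's exact model.

```python
# Hedged sketch: globally coupled two-state stochastic units and their order parameter.
# The flipping rule below is an assumption for illustration, not the dissertation's model.
import numpy as np

def simulate(n_units=500, coupling=1.5, base_rate=0.05, steps=2000, seed=1):
    rng = np.random.default_rng(seed)
    states = rng.choice([-1, 1], size=n_units)
    order = np.empty(steps)
    for t in range(steps):
        xi = states.mean()                      # global order parameter in [-1, 1]
        # A unit flips more readily when it disagrees with the majority.
        p_flip = base_rate * np.exp(-coupling * states * xi)
        flips = rng.random(n_units) < np.clip(p_flip, 0, 1)
        states[flips] *= -1
        order[t] = xi
    return order

order = simulate()
print("mean |order parameter| after transient:", round(np.abs(order[500:]).mean(), 3))
```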

  15. Controlling contagious processes on temporal networks via adaptive rewiring

    CERN Document Server

    Belik, Vitaly; Hövel, Philipp

    2015-01-01

    We consider recurrent contagious processes on a time-varying network. As a control procedure to mitigate the epidemic, we propose an adaptive rewiring mechanism for temporary isolation of infected nodes upon their detection. As a case study, we investigate the network of pig trade in Germany. Based on extensive numerical simulations for a wide range of parameters, we demonstrate that the adaptation mechanism leads to a significant extension of the parameter range for which most of the index nodes (origins of the epidemic) lead to vanishing epidemics. We find that diseases with detection times around a week and infectious periods up to 3 months can be effectively controlled. Furthermore, the performance of adaptation is very heterogeneous with respect to the index node. We identify index nodes that are most responsive to the adaptation strategy and quantify the success of the proposed adaptation scheme in dependence on the infectious period and detection times.
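
    A toy sketch of the adaptive mechanism described above: susceptible-infected dynamics run over a time-stamped contact list, and infected nodes are isolated once a detection delay has elapsed. The contact data, rates and delays are invented for illustration and do not reflect the pig-trade dataset.

```python
# Hedged sketch: contagion on a temporal contact list with temporary isolation
# of infected nodes after a detection delay (toy parameters, synthetic contacts).
import random

random.seed(0)
n_nodes, n_days = 50, 120
contacts = [(t, random.randrange(n_nodes), random.randrange(n_nodes))
            for t in range(n_days) for _ in range(30)]      # (day, src, dst)

beta = 0.5                 # per-contact transmission probability
infectious_period = 21     # days until recovery
detection_delay = 7        # days until an infected node is detected and isolated

infected_since = {0: 0}    # node -> day of infection (node 0 is the index node)
recovered = set()

for day in range(n_days):
    # Recover nodes whose infectious period has ended.
    for node, t0 in list(infected_since.items()):
        if day - t0 >= infectious_period:
            recovered.add(node)
            del infected_since[node]
    # Detected nodes are isolated: their contacts for this day are dropped.
    isolated = {n for n, t0 in infected_since.items() if day - t0 >= detection_delay}
    for t, src, dst in contacts:
        if t != day or src in isolated or dst in isolated:
            continue
        for a, b in ((src, dst), (dst, src)):
            if a in infected_since and b not in infected_since and b not in recovered:
                if random.random() < beta:
                    infected_since[b] = day

print("final outbreak size:", len(infected_since) + len(recovered))
```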

  16. Contribution of bioanthropology to the reconstruction of prehistoric productive processes. The external auditory exostoses in the prehispanic population of Gran Canaria

    Directory of Open Access Journals (Sweden)

    Velasco Vázquez, Javier

    2001-06-01

    Full Text Available The aim of this paper is to examine the role of bioanthropological studies in the reconstruction of the productive processes of past societies. This objective is pursued through the survey and assessment of the prevalence of bony exostoses of the auditory canal among the prehistoric inhabitants of Gran Canaria. The auditory exostosis is a bone lesion, well documented in clinical and experimental studies, that is closely related to exposure of the auditory canal to cold water. The estimation of this bone anomaly in the analysed population reveals marked territorial variations in the economic strategies of these human groups.

    In the present work we address the role of bioanthropological studies in the reconstruction of the productive processes of past societies. This goal is pursued through the examination and assessment of the prevalence of bony exostoses of the auditory canal in the prehistoric population of Gran Canaria. Auditory exostoses are bone lesions, well documented in experimental and clinical studies, that are closely related to exposure of the auditory canal to cold water. The estimation of this bone anomaly in the population analysed allows the definition of important territorial variations in the economic strategies undertaken by these human groups.

  17. White matter microstructure is associated with auditory and tactile processing in children with and without sensory processing disorder

    OpenAIRE

    Yi Shin Chang; Mathilde Gratiot; Owen, Julia P.; Anne Brandes-Aitken; Desai, Shivani S.; Susanna S Hill; Anne B Arnett; Julia Harris; Marco, Elysa J.; Pratik Mukherjee

    2016-01-01

    Sensory processing disorders (SPD) affect up to 16% of school-aged children, and contribute to cognitive and behavioral deficits impacting affected individuals and their families. While sensory processing differences are now widely recognized in children with autism, children with sensory-based dysfunction who do not meet autism criteria based on social communication deficits remain virtually unstudied. In a previous pilot diffusion tensor imaging (DTI) study, we demonstrated that boys with ...

  18. Second-order analysis of inhomogeneous spatio-temporal point process data

    NARCIS (Netherlands)

    Gabriel, Edith; Diggle, Peter J.

    2009-01-01

    Second-order methods provide a natural starting point for the analysis of spatial point process data. In this note we extend to the spatio-temporal setting a method proposed by Baddeley et al. [Statistica Neerlandica (2000) Vol. 54, pp. 329-350] for inhomogeneous spatial point process data, and appl
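
    For orientation, a naive (edge-correction-free) estimator of an inhomogeneous spatio-temporal K-function can be written as below; the normalisation and corrections used in the paper may differ, so treat this as a rough sketch only.

```python
# Hedged sketch: a naive (no edge correction) estimator of an inhomogeneous
# spatio-temporal K-function; the paper's estimator includes further corrections.
import numpy as np

def st_k_naive(xy, times, lam, r, t_lag, area, t_len):
    """xy: (n,2) locations; times: (n,) event times; lam: (n,) intensity at each event."""
    n = len(times)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(xy[i] - xy[j])
            if d <= r and abs(times[i] - times[j]) <= t_lag:
                total += 1.0 / (lam[i] * lam[j])
    return total / (area * t_len)

rng = np.random.default_rng(0)
n = 200
xy = rng.uniform(0, 1, size=(n, 2))
times = rng.uniform(0, 1, size=n)
lam = np.full(n, n)   # constant intensity (events per unit area and time) for this toy case
print(st_k_naive(xy, times, lam, r=0.1, t_lag=0.1, area=1.0, t_len=1.0))
```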

  19. Event-Related Brain Potentials Reveal Anomalies in Temporal Processing of Faces in Autism Spectrum Disorder

    Science.gov (United States)

    McPartland, James; Dawson, Geraldine; Webb, Sara J.; Panagiotides, Heracles; Carver, Leslie J.

    2004-01-01

    Background: Individuals with autism exhibit impairments in face recognition, and neuroimaging studies have shown that individuals with autism exhibit abnormal patterns of brain activity during face processing. The current study examined the temporal characteristics of face processing in autism and their relation to behavior. Method: High-density…

  20. In search of an auditory engram

    Science.gov (United States)

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, it raises the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory. PMID:15967995

  1. Spike-coding mechanisms of cerebellar temporal processing in classical conditioning and voluntary movements.

    Science.gov (United States)

    Yamaguchi, Kenji; Sakurai, Yoshio

    2014-10-01

    Time is a fundamental and critical factor in daily life. Millisecond timing, which is the temporal processing underlying speaking, dancing, and other activities, is reported to rely on the cerebellum. In this review, we discuss the cerebellar spike-coding mechanisms for temporal processing. Although the contribution of the cerebellum to both classical conditioning and voluntary movements is well known, the difference in the mechanisms for temporal processing between classical conditioning and voluntary movements is not clear. Therefore, we review the evidence of cerebellar temporal processing in studies of classical conditioning and voluntary movements and report the similarities and differences between them. From studies that used tasks which can change some of the temporal properties (e.g., the duration of interstimulus intervals) while keeping the movements identical, we concluded that classical conditioning and voluntary movements may share a common spike-coding mechanism, because simple spikes in Purkinje cells decrease at predicted times for responses regardless of the intervals between responses or stimulation.

  2. Temporal and speech processing skills in normal hearing individuals exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    U Ajith Kumar

    2012-01-01

    Full Text Available Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. Consequences of cochlear hearing loss on speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years and their non-noise-exposed counterparts (n = 30 in each age group). Participants of all the groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.
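
    As a small aside on one detail above: presenting babble at -5 dB SNR means scaling the masker so that its power is about 3.16 times the speech power. A hedged sketch with synthetic signals (not the study's stimuli):

```python
# Hedged sketch: mix a target signal with masking noise at a chosen SNR (in dB).
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that 10*log10(P_signal / P_noise) equals snr_db, then add."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

fs = 16000
t = np.arange(0, 1, 1 / fs)
speech_like = np.sin(2 * np.pi * 200 * t)        # stand-in for a speech token
babble_like = np.random.randn(t.size)            # stand-in for multi-talker babble
mixed = mix_at_snr(speech_like, babble_like, snr_db=-5)

achieved = 10 * np.log10(np.mean(speech_like ** 2) / np.mean((mixed - speech_like) ** 2))
print(f"achieved SNR: {achieved:.1f} dB")        # ~ -5.0
```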

  3. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  4. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis.

    Science.gov (United States)

    Fletcher, Phillip D; Downey, Laura E; Golden, Hannah L; Clark, Camilla N; Slattery, Catherine F; Paterson, Ross W; Schott, Jonathan M; Rohrer, Jonathan D; Rossor, Martin N; Warren, Jason D

    2015-06-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music, but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music ('musicophilia') occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717

  5. Testing the weak stationarity of a spatio-temporal point process

    DEFF Research Database (Denmark)

    Ghorbani, Mohammad

    2013-01-01

    A common assumption in analyzing spatial and spatio-temporal point processes is stationarity, while in many real applications, because of environmental effects, the stationarity condition is often not met. We propose two types of test statistics to test stationarity for spatio-temporal point processes, by adapting Palahi, Pukkala & Mateu (2009) and by considering the square difference between observed and expected (under stationarity) intensities. We study the efficiency of the new statistics using simulated data, and we apply them to test the stationarity of real data.
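
    A very rough stand-in for the second idea mentioned above (comparing observed with expected counts under stationarity) is sketched below as a cell-based, chi-square-like statistic; the paper's actual statistics are defined differently in detail.

```python
# Hedged sketch: a chi-square-like statistic comparing observed cell counts of a
# spatio-temporal point pattern with the counts expected under stationarity.
# This is a simplified stand-in for the statistics proposed in the paper.
import numpy as np

def stationarity_stat(xy, times, n_space_bins=4, n_time_bins=4):
    counts, _ = np.histogramdd(
        np.column_stack([xy, times]),
        bins=(n_space_bins, n_space_bins, n_time_bins),
        range=[(0, 1), (0, 1), (0, 1)],
    )
    expected = len(times) / counts.size          # uniform expectation under stationarity
    return np.sum((counts - expected) ** 2 / expected)

rng = np.random.default_rng(1)
n = 400
homogeneous = stationarity_stat(rng.uniform(0, 1, (n, 2)), rng.uniform(0, 1, n))
clustered_xy = np.clip(rng.normal(0.3, 0.1, (n, 2)), 0, 1)   # spatially concentrated
clustered = stationarity_stat(clustered_xy, rng.uniform(0, 1, n))
print("homogeneous pattern:", round(homogeneous, 1))
print("clustered pattern:  ", round(clustered, 1))
```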

  6. ABR and auditory P300 findings in children with ADHD

    OpenAIRE

    Schochat Eliane; Scheuer Claudia Ines; Andrade Ênio Roberto de

    2002-01-01

    Auditory processing disorders (APD), also referred to as central auditory processing disorders (CAPD), and attention deficit hyperactivity disorders (ADHD) have become popular diagnostic entities for school age children. A high incidence of ADHD comorbid with communication disorders and auditory processing disorder has been demonstrated. The aim of this study was to investigate ABR and P300 auditory evoked potentials in children with ADHD, in a double-blind study. Twenty-one children, ages bet...

  7. Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process

    Directory of Open Access Journals (Sweden)

    ZHANG Feng

    2016-02-01

    Full Text Available The core of modern cadastre management is to renew the cadastral database and keep it current, topologically consistent and complete. This paper analyzed the changes of various cadastral objects and their linkage in the update process. Combining object-oriented modeling techniques with the expression of the evolution of spatio-temporal objects, the paper proposed a cadastral data updating model based on the spatio-temporal process, in line with the way people think about such changes. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and cascade updating and history back-tracing of cadastral features, land use and buildings are realized. The model was implemented in the cadastral management system ReGIS. Cascade changes are triggered by the direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, analysis and the forecasting of future changes.
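
    A schematic illustration (not the ReGIS implementation) of the core idea: each cadastral object version carries a temporal lifespan, an update closes the current version and links a successor, and the version chain supports tracing history back. Class and field names are invented.

```python
# Hedged sketch: versioned cadastral objects with a temporal lifespan; an update
# closes the current version and links a successor, enabling history back-tracing.
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional

@dataclass
class ParcelVersion:
    parcel_id: str
    geometry: str                      # placeholder for a real geometry type
    owner: str
    valid_from: date
    valid_to: Optional[date] = None    # None means "current version"
    predecessor: Optional["ParcelVersion"] = None

class CadastralStore:
    def __init__(self) -> None:
        self.current: Dict[str, ParcelVersion] = {}

    def register(self, version: ParcelVersion) -> None:
        self.current[version.parcel_id] = version

    def update(self, parcel_id: str, when: date, **changes) -> ParcelVersion:
        old = self.current[parcel_id]
        old.valid_to = when                                   # close the old version
        new = ParcelVersion(parcel_id=parcel_id,
                            geometry=changes.get("geometry", old.geometry),
                            owner=changes.get("owner", old.owner),
                            valid_from=when,
                            predecessor=old)
        self.current[parcel_id] = new
        return new

    def history(self, parcel_id: str) -> List[ParcelVersion]:
        chain, version = [], self.current[parcel_id]
        while version is not None:
            chain.append(version)
            version = version.predecessor
        return chain                                          # newest first

store = CadastralStore()
store.register(ParcelVersion("P-1", "POLYGON(...)", "Alice", date(2010, 1, 1)))
store.update("P-1", date(2015, 6, 1), owner="Bob")
print([(v.owner, v.valid_from, v.valid_to) for v in store.history("P-1")])
```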

  8. Temporally Regular Musical Primes Facilitate Subsequent Syntax Processing in Children with Specific Language Impairment.

    Science.gov (United States)

    Bedoin, Nathalie; Brisseau, Lucie; Molinier, Pauline; Roch, Didier; Tillmann, Barbara

    2016-01-01

    Children with developmental language disorders have been shown to be also impaired in rhythm and meter perception. Temporal processing and its link to language processing can be understood within the dynamic attending theory. An external stimulus can stimulate internal oscillators, which orient attention over time and drive speech signal segmentation to provide benefits for syntax processing, which is impaired in various patient populations. For children with Specific Language Impairment (SLI) and dyslexia, previous research has shown the influence of an external rhythmic stimulation on subsequent language processing by comparing the influence of a temporally regular musical prime to that of a temporally irregular prime. Here we tested whether the observed rhythmic stimulation effect is indeed due to a benefit provided by the regular musical prime (rather than a cost subsequent to the temporally irregular prime). Sixteen children with SLI and 16 age-matched controls listened to either a regular musical prime sequence or an environmental sound scene (without temporal regularities in event occurrence; i.e., referred to as "baseline condition") followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence. Results revealed that performance for the grammaticality judgments was better after the regular prime sequences than after the baseline sequences. Our findings are interpreted in the theoretical framework of the dynamic attending theory (Jones, 1976) and the temporal sampling (oscillatory) framework for developmental language disorders (Goswami, 2011). Furthermore, they encourage the use of rhythmic structures (even in non-verbal materials) to boost linguistic structure processing and outline perspectives for rehabilitation. PMID:27378833

  10. Temporally regular musical primes facilitate subsequent syntax processing in children with Specific Language Impairment

    Directory of Open Access Journals (Sweden)

    Nathalie Bedoin

    2016-06-01

    Full Text Available Children with developmental language disorders have been shown to be also impaired in rhythm and meter perception. Temporal processing and its link to language processing can be understood within the dynamic attending theory. An external stimulus can stimulate internal oscillators, which orient attention over time and drive speech signal segmentation to provide benefits for syntax processing, which is impaired in various patient populations. For children with Specific Language Impairment (SLI) and dyslexia, previous research has shown the influence of an external rhythmic stimulation on subsequent language processing by comparing the influence of a temporally regular musical prime to that of a temporally irregular prime. Here we tested whether the observed rhythmic stimulation effect is indeed due to a benefit provided by the regular musical prime (rather than a cost subsequent to the temporally irregular prime). Sixteen children with SLI and 16 age-matched controls listened to either a regular musical prime sequence or an environmental sound scene (without temporal regularities in event occurrence; i.e., referred to as ‘baseline condition’) followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence. Results revealed that performance for the grammaticality judgments was better after the regular prime sequences than after the baseline sequences. Our findings are interpreted in the theoretical framework of the dynamic attending theory (Jones, 1976) and the temporal sampling (oscillatory) framework for developmental language disorders (Goswami, 2011). Furthermore, they encourage the use of rhythmic structures (even in nonverbal materials) to boost linguistic structure processing and outline perspectives for rehabilitation.

  11. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  13. Mechanisms of Auditory Verbal Hallucination in Schizophrenia

    OpenAIRE

    Raymond Cho; Wayne Wu

    2013-01-01

    Recent work on the mechanisms underlying auditory verbal hallucination (AVH) has been heavily informed by self-monitoring accounts that postulate defects in an internal monitoring mechanism as the basis of AVH. A more neglected alternative is an account focusing on defects in auditory processing, namely a spontaneous activation account of auditory activity underlying AVH. Science is often aided by putting theories in competition. Accordingly, a discussion that systematically contrasts the two...

  14. Temporal processing in the olfactory system: can we see a smell?

    Science.gov (United States)

    Gire, David H; Restrepo, Diego; Sejnowski, Terrence J; Greer, Charles; De Carlos, Juan A; Lopez-Mascaraque, Laura

    2013-05-01

    Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher areas in the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by neuroanatomical studies of Ramon y Cajal over a century ago. Consideration of differences in neural activity in sensory systems contributes to generating new approaches to understanding signal processing.

  15. Temporal discounting and criminal thinking: understanding cognitive processes to align services.

    Science.gov (United States)

    Varghese, Femina P; Charlton, Shawn R; Wood, Mara; Trower, Emily

    2014-05-01

    Temporal discounting is an indicator of impulsivity that has consistently been found to be associated with risky behaviors such as substance abuse and compulsive gambling. Yet, although criminal acts are clearly risky choice behaviors, no study has examined temporal discounting in relation to the criminal attitudes and behaviors of adult offenders. Such investigations have the potential to clarify the cognitive processes that underlie different patterns of criminal thinking and may help distinguish between high- and low-risk offenders. The current study therefore addressed this gap in the literature using 146 male inmates within 5 months of release. Results showed that temporal discounting was correlated with reactive criminal thinking but not with proactive criminal thinking. In addition, inmates with higher rates of incarceration were also more likely to have higher rates of temporal discounting. The results shed light on the different cognitive processes that may underlie different styles of criminal thinking, as well as potential differences in discounting rates depending on incarceration history. This finding has implications for service delivery in criminal justice settings, as those with reactive criminal thinking may benefit from specialized treatments targeting temporal discounting. PMID:24635040
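
    The abstract does not state the discounting model used, but temporal discounting is commonly quantified with Mazur's hyperbolic function V = A / (1 + kD), where a larger k means steeper devaluation of delayed rewards. The sketch below, with made-up indifference points, shows how a single discounting rate k can be estimated; it is a generic illustration, not the study's procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      AMOUNT = 100.0  # delayed reward used in the hypothetical choice questions

      def hyperbolic(delay_days, k):
          """Mazur's hyperbolic discounting: subjective value of AMOUNT after a delay."""
          return AMOUNT / (1.0 + k * delay_days)

      # Hypothetical indifference points (the immediate amount judged equal in value
      # to the delayed AMOUNT); these numbers are illustrative, not data from the study.
      delays = np.array([1.0, 7.0, 30.0, 90.0, 365.0])         # days
      indifference = np.array([95.0, 80.0, 60.0, 40.0, 20.0])  # dollars

      (k_hat,), _ = curve_fit(hyperbolic, delays, indifference, p0=[0.01])
      print(f"estimated discounting rate k = {k_hat:.4f} per day")
      # Larger k = steeper devaluation of delayed outcomes, i.e. more impulsive choice.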

  16. Loss of auditory sensitivity from inner hair cell synaptopathy can be centrally compensated in the young but not old brain.

    Science.gov (United States)

    Möhrle, Dorit; Ni, Kun; Varakina, Ksenya; Bing, Dan; Lee, Sze Chim; Zimmermann, Ulrike; Knipper, Marlies; Rüttiger, Lukas

    2016-08-01

    A dramatic shift in societal demographics will lead to rapid growth in the number of older people with hearing deficits. Poorer performance in suprathreshold speech understanding and temporal processing with age has been previously linked with progressing inner hair cell (IHC) synaptopathy that precedes age-dependent elevation of auditory thresholds. We compared central sound responsiveness after acoustic trauma in young, middle-aged, and older rats. We demonstrate that IHC synaptopathy progresses from middle age onward and hearing threshold becomes elevated from old age onward. Interestingly, middle-aged animals could centrally compensate for the loss of auditory fiber activity through an increase in late auditory brainstem responses (late auditory brainstem response wave) linked to shortening of central response latencies. In contrast, old animals failed to restore central responsiveness, which correlated with reduced temporal resolution in responding to amplitude changes. These findings may suggest that cochlear IHC synaptopathy with age does not necessarily induce temporal auditory coding deficits, as long as the capacity to generate neuronal gain maintains normal sound-induced central amplitudes. PMID:27318145

  17. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    Science.gov (United States)

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitude also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may

  18. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues... that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners... listeners and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones...

  19. Temporal Beta Diversity of Bird Assemblages in Agricultural Landscapes: Land Cover Change vs. Stochastic Processes.

    Directory of Open Access Journals (Sweden)

    Andrés Baselga

    Full Text Available Temporal variation in the composition of species assemblages could be the result of deterministic processes driven by environmental change and/or stochastic processes of colonization and local extinction. Here, we analyzed the relative roles of deterministic and stochastic processes on bird assemblages in an agricultural landscape of southwestern France. We first assessed the impact of land cover change that occurred between 1982 and 2007 on (i) the species composition (presence/absence) of bird assemblages and (ii) the spatial pattern of taxonomic beta diversity. We also compared the observed temporal change of bird assemblages with a null model accounting for the effect of stochastic dynamics on temporal beta diversity. Temporal assemblage dissimilarity was partitioned into two separate components, accounting for the replacement of species (i.e. turnover) and for the nested species losses or gains from one time to the other (i.e. nestedness-resultant dissimilarity), respectively. Neither the turnover nor the nestedness-resultant components of temporal variation were accurately explained by any of the measured variables accounting for land cover change (r² < 0.06 in all cases). Additionally, the amount of spatial assemblage heterogeneity in the region did not significantly change between 1982 and 2007, and site-specific observed temporal dissimilarities were larger than null expectations in only 1% of sites for temporal turnover and 13% of sites for nestedness-resultant dissimilarity. Taken together, our results suggest that land cover change in this agricultural landscape had little impact on temporal beta diversity of bird assemblages. Although other unmeasured deterministic processes could be driving the observed patterns, it is also possible that the observed changes in presence/absence species composition of local bird assemblages might be the consequence of stochastic processes in which species populations appeared and disappeared from specific
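
    The turnover/nestedness partition referred to above follows Baselga's pairwise scheme for presence/absence data, in which Sørensen dissimilarity splits into a Simpson (turnover) component and a nestedness-resultant remainder. The sketch below illustrates that partition for two hypothetical species lists; the species names and site data are invented for illustration.

      def beta_partition(site1, site2):
          """Partition pairwise Sørensen dissimilarity (presence/absence data)
          into turnover (beta_sim) and nestedness-resultant (beta_sne) components."""
          s1, s2 = set(site1), set(site2)
          a = len(s1 & s2)                 # species shared by both assemblages
          b = len(s1 - s2)                 # species unique to the first assemblage
          c = len(s2 - s1)                 # species unique to the second assemblage
          beta_sor = (b + c) / (2 * a + b + c)        # total dissimilarity
          beta_sim = min(b, c) / (a + min(b, c))      # turnover (replacement)
          beta_sne = beta_sor - beta_sim              # nestedness-resultant part
          return beta_sor, beta_sim, beta_sne

      # Hypothetical example: one site surveyed at two dates.
      before = {"skylark", "corn bunting", "quail", "lapwing"}
      after = {"skylark", "quail", "starling"}
      print(beta_partition(before, after))   # -> approximately (0.429, 0.333, 0.095)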

  20. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    NARCIS (Netherlands)

    Talsma, Durk; Senkowski, Daniel; Woldorff, Marty G.

    2009-01-01

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory

  1. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.

    Science.gov (United States)

    Reinl, Maren; Bartels, Andreas

    2014-11-15

    Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal sequence sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal sequence sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. PMID:25132020

  2. Auditory sequence analysis and phonological skill.

    Science.gov (United States)

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  3. Poor supplementary motor area activation differentiates auditory verbal hallucination from imagining the hallucination.

    Science.gov (United States)

    Raij, Tuukka T; Riekki, Tapani J J

    2012-01-01

    Neuronal underpinnings of auditory verbal hallucination remain poorly understood. One suggested mechanism is brain activation that is similar to verbal imagery but occurs without the proper activation of the neuronal systems that are required to tag the origins of verbal imagery in one's mind. Such neuronal systems involve the supplementary motor area. The supplementary motor area has been associated with awareness of intention to make a hand movement, but whether this region is related to the sense of ownership of one's verbal thought remains poorly known. We hypothesized that the supplementary motor area is related to the distinction between one's own mental processing (auditory verbal imagery) and similar processing that is attributed to non-self author (auditory verbal hallucination). To test this hypothesis, we asked patients to signal the onset and offset of their auditory verbal hallucinations during functional magnetic resonance imaging. During non-hallucination periods, we asked the same patients to imagine the hallucination they had previously experienced. In addition, healthy control subjects signaled the onset and offset of self-paced imagery of similar voices. Both hallucinations and the imagery of hallucinations were associated with similar activation strengths of the fronto-temporal language-related circuitries, but the supplementary motor area was activated more strongly during the imagery than during hallucination. These findings suggest that auditory verbal hallucination resembles verbal imagery in language processing, but without the involvement of the supplementary motor area, which may subserve the sense of ownership of one's own verbal imagery. PMID:24179739

  5. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Objectives. 1. To determine which areas of the cerebral cortex are activated by stimulating the left ear with pure tones, and what type of stimulation occurs (e.g. excitatory or inhibitory) in these different areas. 2. To use this information as an initial step in developing a normal functional database for future studies. 3. To try to determine whether there is a biological substrate to the process of recalling previous auditory perceptions and, if possible, to suggest a locus for auditory memory. Method. Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted: 1-2) using auditory stimulation with pure tones in 4 volunteers with normal hearing; 3) in a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences, and who was injected with Tc99m HMPAO while she was having the sensation of hearing a well-known melody. Results. Both in the patient with auditory hallucinations and in the normal controls stimulated with pure tones there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right vs. left, p < 0.05). With lesser intensity there was activation in the adjacent area 40, and there was also intense activation in the executive frontal cortex, Brodmann areas 6, 8, 9, and 10. There was also activation of Brodmann area 7, an audio-visual association area, more marked on the right side in the patient and in the normal stimulated controls. In the subcortical structures there was also marked activation in the patient with hallucinations in both lentiform nuclei, the thalamus, and the caudate nuclei, again more intense in the right hemisphere (5, 4.7, and 4.2 S.D. above the mean, respectively) and 5, 3.3, and 3 S.D. above the normal mean in the left hemisphere. Similar findings were observed in normal controls. Conclusions. After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39

  6. Neural network for processing both spatial and temporal data with time based back-propagation

    Science.gov (United States)

    Villarreal, James A. (Inventor); Shelton, Robert O. (Inventor)

    1993-01-01

    Neural networks are computing systems modeled after the paradigm of the biological brain. For years, researchers using various forms of neural networks have attempted to model the brain's information processing and decision-making capabilities. Neural network algorithms have impressively demonstrated the capability of modeling spatial information. On the other hand, the application of parallel distributed models to the processing of temporal data has been severely restricted. The invention introduces a novel technique which adds the dimension of time to the well-known back-propagation neural network algorithm. In the space-time neural network disclosed herein, the synaptic weights between two artificial neurons (processing elements) are replaced with an adaptable-adjustable filter. Instead of a single synaptic weight, the invention provides a plurality of weights representing not only association, but also temporal dependencies. In this case, the synaptic weights are the coefficients of the adaptable digital filters. Novelty is believed to lie in the disclosure of a processing element and a network of the processing elements which are capable of processing temporal as well as spatial data.
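
    The core idea, as we read it, can be sketched as follows: each scalar synaptic weight is replaced by a short adaptive FIR filter, so a connection carries temporal as well as spatial dependencies. The tap count, learning rule, and names below are illustrative assumptions, not the patented algorithm itself.

      import numpy as np

      class FIRSynapse:
          """A connection whose 'weight' is a short adaptive filter (several taps),
          so the post-synaptic input depends on the present and recent past inputs."""
          def __init__(self, n_taps=4, lr=0.01, rng=None):
              rng = rng or np.random.default_rng(0)
              self.taps = rng.normal(scale=0.1, size=n_taps)   # filter coefficients
              self.history = np.zeros(n_taps)                  # delay line of inputs
              self.lr = lr

          def forward(self, x):
              self.history = np.roll(self.history, 1)
              self.history[0] = x
              return float(self.taps @ self.history)           # filtered contribution

          def update(self, error):
              # Gradient step: each tap is credited with the input it actually saw,
              # a time-based analogue of the ordinary back-propagation weight update.
              self.taps += self.lr * error * self.history

      # Toy usage: one synapse learning to predict the next sample of a sine wave.
      syn = FIRSynapse()
      signal = np.sin(np.linspace(0, 8 * np.pi, 400))
      for t in range(len(signal) - 1):
          y = syn.forward(signal[t])
          syn.update(signal[t + 1] - y)       # error between target and prediction
      print("learned taps:", np.round(syn.taps, 3))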

  7. Ontology Mapping of Business Process Modeling Based on Formal Temporal Logic

    Directory of Open Access Journals (Sweden)

    Irfan Chishti

    2014-08-01

    Full Text Available A business process is the combination of a set of activities with logical order and dependence, whose objective is to produce a desired goal. Business process modeling (BPM) using knowledge of the available process modeling techniques enables a common understanding and analysis of a business process. Industry and academia use informal and formal techniques, respectively, to represent business processes (BP), with the main objective of supporting an organization. Although both aim at BPM, the techniques used are quite different in their semantics. The literature review found that no general representation of business process modeling is available that is more expressive than the commercial modeling tools and techniques. Therefore, this work primarily provides an ontology mapping of the modeling terms of Business Process Modeling Notation (BPMN), Unified Modeling Language (UML) Activity Diagrams (AD), and Event Driven Process Chains (EPC) to temporal logic. Being a formal system, first-order logic assists in a thorough understanding of process modeling and its application. Our contribution is to devise a versatile conceptual categorization of modeling terms/constructs and to formalize them, based on well-accepted business notions such as action, event, process, connector, and flow. It is demonstrated that the new categorization of modeling terms, mapped to formal temporal logic, provides the expressive power to subsume the business process modeling techniques BPMN, UML AD, and EPC.
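
    As a rough illustration of the kind of mapping the paper proposes, the snippet below pairs a few common BPMN/UML-AD/EPC constructs with linear-temporal-logic-style templates over the shared notions of action, event, and flow. The dictionary and formulas are our illustrative assumptions, not the paper's actual ontology.

      # LTL-style templates for a handful of process-modeling constructs.
      # G = "globally", F = "eventually", X = "next".
      CONSTRUCT_TO_LTL = {
          "sequence flow (A then B)":      "G( done(A) -> F done(B) )",
          "exclusive gateway (XOR split)": "G( done(split) -> (F done(B1)) xor (F done(B2)) )",
          "parallel gateway (AND split)":  "G( done(split) -> (F done(B1)) and (F done(B2)) )",
          "EPC event triggers function":   "G( occurs(E) -> X started(F) )",
          "process termination":           "F done(end_event)",
      }

      def formalize(construct: str, **names: str) -> str:
          """Instantiate a template with concrete activity/event names."""
          formula = CONSTRUCT_TO_LTL[construct]
          for placeholder, name in names.items():
              formula = formula.replace(placeholder, name)
          return formula

      print(formalize("sequence flow (A then B)", A="receive_order", B="check_credit"))
      # G( done(receive_order) -> F done(check_credit) )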

  8. Effects of parietal TMS on visual and auditory processing at the primary cortical level -- a concurrent TMS-fMRI study

    DEFF Research Database (Denmark)

    Leitão, Joana; Thielscher, Axel; Werner, Sebastian;

    2013-01-01

    Accumulating evidence suggests that multisensory interactions emerge already at the primary cortical level. Specifically, auditory inputs were shown to suppress activations in visual cortices when presented alone but amplify the blood oxygen level-dependent (BOLD) responses to concurrent visual... cortices under 3 sensory contexts: visual, auditory, and no stimulation. IPS-TMS increased activations in auditory cortices irrespective of sensory context as a result of direct and nonspecific auditory TMS side effects. In contrast, IPS-TMS modulated activations in the visual cortex in a state... deactivations induced by auditory activity to TMS sounds. TMS to IPS may increase the responses in visual (or auditory) cortices to visual (or auditory) stimulation via a gain control mechanism or crossmodal interactions. Collectively, our results demonstrate that understanding TMS effects on (uni...

  9. Achados na triagem imitanciométrica e de processamento auditivo em escolares Acoustic immitance and auditory processing screening findings in school children

    Directory of Open Access Journals (Sweden)

    Camila Lucia Etges

    2012-12-01

    Full Text Available PURPOSE: to verify the findings of the acoustic immittance screening and of the simplified auditory processing evaluation tests in school children. METHOD: the participants were students from the 1st to the 4th grade, aged seven to ten years, from a public school in Porto Alegre. 130 school children were evaluated in the immittance screening, which consisted of tympanometry and ipsilateral acoustic reflex testing, and in the simplified auditory processing evaluation, which included tests of sound localization, sequential memory for verbal sounds, and sequential memory for non-verbal sounds. RESULTS: 43.08% of the school children passed the immittance screening, with the type A curve being the most frequent. The acoustic reflex at 4000 Hz was present in a lower percentage of children than at the other frequencies. 76.15% of the children passed the simplified auditory processing evaluation tests; the test with the worst performance was sequential memory for verbal sounds. 12.3% of the school children failed both the immittance screening and the simplified auditory processing evaluation. CONCLUSION: the type A tympanometric curve was the most frequent in the studied population. Most subjects passed the simplified auditory processing evaluation, with the highest proportion of correct responses in the sound localization test. There was no statistical association between the result of the immittance screening and the result of the simplified auditory processing evaluation.

  10. Avaliação do processamento auditivo em idosos que relatam ouvir bem Auditory processing assessment in older people with no report of hearing disability

    Directory of Open Access Journals (Sweden)

    Maura Ligia Sanchez

    2008-12-01

    Full Text Available In the elderly, the results of behavioral assessment of the central auditory pathways are considered difficult to interpret because of the possible interference of peripheral auditory pathway involvement. AIM: to evaluate the efficiency of the central auditory functions of elderly individuals who report hearing well. MATERIALS AND METHODS: case study including 40 individuals aged 60 to 75 years. The patients underwent an auditory processing evaluation consisting of anamnesis, otorhinolaryngological examination, pure-tone threshold audiometry, speech recognition threshold, speech recognition index, immittance measures, stapedial reflex testing, the synthetic sentence identification test with ipsilateral competing message, the frequency pattern test, and the alternating disyllables test in a dichotic task. RESULTS: gender, age group, and hearing loss did not influence the results of the frequency pattern test or the alternating disyllables dichotic test; age group and hearing loss did influence the results of the sentence identification test with ipsilateral competing message. Percentages of correct responses below adult normality standards were observed in all three tests that assess central auditory functions. CONCLUSION: elderly individuals who report hearing well show a relevant prevalence of signs of inefficiency of the central auditory functions.

  11. Processes driving temporal dynamics in the nested pattern of waterbird communities

    Science.gov (United States)

    Sebastián-González, Esther; Botella, Francisco; Paracuellos, Mariano; Sánchez-Zapata, José Antonio

    2010-03-01

    Nestedness is a common pattern of bird communities in habitat patches, and it describes the situation where smaller communities form proper subsets of larger communities. Several studies have examined the processes causing nestedness and the implications for conservation, but few have considered the temporal changes in these processes. We used data from 6 years and two seasons (wintering and breeding) to explore the temporal changes in the causes of the nested pattern of a waterbird community in man-made irrigation ponds. Nestedness was significant in both seasons and in all years, and thus temporally stable. Despite the nestedness of waterbird communities, the proportion of idiosyncratic species (species that do not follow the nested pattern) was higher than in other studies. Furthermore, the idiosyncratic species often had endangered status. Selective colonisation and, mainly, selective extinction were the most important factors producing the nested pattern. In addition, the nested structure of the microhabitats at the ponds also caused the pattern. The causes of the pattern changed temporally even in the absence of big disturbance events. In general, breeding communities were more stable than wintering communities, and the seasonal differences in the causes of the nestedness were larger than the inter-annual differences. Consequently, studies of community nestedness from only one snapshot in time should be considered with caution.

  12. Neural Androgen Receptor Deletion Impairs the Temporal Processing of Objects and Hippocampal CA1-Dependent Mechanisms.

    Science.gov (United States)

    Picot, Marie; Billard, Jean-Marie; Dombret, Carlos; Albac, Christelle; Karameh, Nida; Daumas, Stéphanie; Hardin-Pouzet, Hélène; Mhaouty-Kodja, Sakina

    2016-01-01

    We studied the role of testosterone, mediated by the androgen receptor (AR), in modulating temporal order memory for visual objects. For this purpose, we used male mice lacking AR specifically in the nervous system. Control and mutant males were gonadectomized at adulthood and supplemented with equivalent amounts of testosterone in order to normalize their hormonal levels. We found that neural AR deletion selectively impaired the processing of temporal information for visual objects, without affecting classical object recognition or anxiety-like behavior and circulating corticosterone levels, which remained similar to those in control males. Thus, mutant males were unable to discriminate between the most recently seen object and previously seen objects, whereas their control littermates showed more interest in exploring previously seen objects. Because the hippocampal CA1 area has been associated with temporal memory for visual objects, we investigated whether neural AR deletion altered the functionality of this region. Electrophysiological analysis showed that neural AR deletion affected basal glutamate synaptic transmission and decreased the magnitude of N-methyl-D-aspartate receptor (NMDAR) activation and high-frequency stimulation-induced long-term potentiation. The impairment of NMDAR function was not due to changes in protein levels of receptor. These results provide the first evidence for the modulation of temporal processing of information for visual objects by androgens, via AR activation, possibly through regulation of NMDAR signaling in the CA1 area in male mice.

  13. Neural Androgen Receptor Deletion Impairs the Temporal Processing of Objects and Hippocampal CA1-Dependent Mechanisms.

    Directory of Open Access Journals (Sweden)

    Marie Picot

    Full Text Available We studied the role of testosterone, mediated by the androgen receptor (AR), in modulating temporal order memory for visual objects. For this purpose, we used male mice lacking AR specifically in the nervous system. Control and mutant males were gonadectomized at adulthood and supplemented with equivalent amounts of testosterone in order to normalize their hormonal levels. We found that neural AR deletion selectively impaired the processing of temporal information for visual objects, without affecting classical object recognition or anxiety-like behavior and circulating corticosterone levels, which remained similar to those in control males. Thus, mutant males were unable to discriminate between the most recently seen object and previously seen objects, whereas their control littermates showed more interest in exploring previously seen objects. Because the hippocampal CA1 area has been associated with temporal memory for visual objects, we investigated whether neural AR deletion altered the functionality of this region. Electrophysiological analysis showed that neural AR deletion affected basal glutamate synaptic transmission and decreased the magnitude of N-methyl-D-aspartate receptor (NMDAR) activation and high-frequency stimulation-induced long-term potentiation. The impairment of NMDAR function was not due to changes in protein levels of receptor. These results provide the first evidence for the modulation of temporal processing of information for visual objects by androgens, via AR activation, possibly through regulation of NMDAR signaling in the CA1 area in male mice.

  14. Word recognition in competing babble and the effects of age, temporal processing, and absolute sensitivity.

    Science.gov (United States)

    Snell, Karen B; Mapes, Frances M; Hickman, Elizabeth D; Frisina, D Robert

    2002-08-01

    This study was designed to clarify whether speech understanding in a fluctuating background is related to temporal processing as measured by the detection of gaps in noise bursts. Fifty adults with normal hearing or mild high-frequency hearing loss served as subjects. Gap detection thresholds were obtained using a three-interval, forced-choice paradigm. A 150-ms noise burst was used as the gap carrier with the gap placed close to carrier onset. A high-frequency masker without a temporal gap was gated on and off with the noise bursts. A continuous white-noise floor was present in the background. Word scores for the subjects were obtained at a presentation level of 55 dB HL in competing babble levels of 50, 55, and 60 dB HL. A repeated measures analysis of covariance of the word scores examined the effects of age, absolute sensitivity, and temporal sensitivity. The results of the analysis indicated that word scores in competing babble decreased significantly with increases in babble level, age, and gap detection thresholds. The effects of absolute sensitivity on word scores in competing babble were not significant. These results suggest that age and temporal processing influence speech understanding in fluctuating backgrounds in adults with normal hearing or mild high-frequency hearing loss.

  15. From sounds to words: a neurocomputational model of adaptation, inhibition and memory processes in auditory change detection.

    Science.gov (United States)

    Garagnani, Max; Pulvermüller, Friedemann

    2011-01-01

    Most animals detect sudden changes in trains of repeated stimuli but only some can learn a wide range of sensory patterns and recognise them later, a skill crucial for the evolutionary success of higher mammals. Here we use a neural model mimicking the cortical anatomy of sensory and motor areas and their connections to explain brain activity indexing auditory change and memory access. Our simulations indicate that while neuronal adaptation and local inhibition of cortical activity can explain aspects of change detection as observed when a repeated unfamiliar sound changes in frequency, the brain dynamics elicited by auditory stimulation with well-known patterns (such as meaningful words) cannot be accounted for on the basis of adaptation and inhibition alone. Specifically, we show that the stronger brain responses observed to familiar stimuli in passive oddball tasks are best explained in terms of activation of memory circuits that emerged in the cortex during the learning of these stimuli. Such memory circuits, and the activation enhancement they entail, are absent for unfamiliar stimuli. The model illustrates how basic neurobiological mechanisms, including neuronal adaptation, lateral inhibition, and Hebbian learning, underlie neuronal assembly formation and dynamics, and differentially contribute to the brain's major change detection response, the mismatch negativity. PMID:20728545
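
    A toy rate-based version of the three ingredients the model combines (neuronal adaptation, lateral inhibition, and Hebbian learning) is sketched below. The unit model, parameter values, and update rules are illustrative assumptions rather than the published model, but they show how repeated exposure builds memory circuits that later enhance the response to familiar patterns.

      import numpy as np

      rng = np.random.default_rng(1)
      n_units = 20
      W = np.zeros((n_units, n_units))        # Hebbian (memory-circuit) weights
      adaptation = np.zeros(n_units)          # slow fatigue variable per unit
      INHIBITION, ADAPT_RATE, ADAPT_DECAY, LEARN_RATE = 0.5, 0.3, 0.9, 0.05

      def present(pattern, learn=True):
          """One stimulus presentation: feedforward drive plus recurrent memory input,
          minus lateral inhibition and adaptation; optionally Hebbian learning."""
          global adaptation, W
          drive = pattern + W @ pattern                      # memory circuits add drive
          inhibition = INHIBITION * drive.mean()             # unspecific lateral inhibition
          response = np.clip(drive - inhibition - adaptation, 0, None)
          adaptation = ADAPT_DECAY * adaptation + ADAPT_RATE * response  # fatigue builds up
          if learn:                                          # Hebbian strengthening
              W += LEARN_RATE * np.outer(response, pattern)
              np.fill_diagonal(W, 0.0)
          return response.sum()                              # crude stand-in for the evoked response

      familiar = (rng.random(n_units) < 0.3).astype(float)
      novel = (rng.random(n_units) < 0.3).astype(float)

      for _ in range(50):                                    # "word learning" phase
          present(familiar)

      adaptation[:] = 0.0                                    # rested state before test
      print("response to familiar pattern:", round(present(familiar, learn=False), 2))
      print("response to novel pattern:   ", round(present(novel, learn=False), 2))
      # The learned connections boost the familiar pattern's response, mimicking the
      # enhanced change-detection response to known words described in the abstract.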

  16. Visual-auditory differences in duration discrimination of intervals in the subsecond and second range

    Directory of Open Access Journals (Sweden)

    Thomas Rammsayer

    2015-10-01

    Full Text Available A common finding in time psychophysics is that temporal acuity is much better for auditory than for visual stimuli. The present study aimed to examine modality-specific differences in duration discrimination within the conceptual framework of the Distinct Timing Hypothesis. This theoretical account proposes that durations in the lower milliseconds range are processed automatically while longer durations are processed by a cognitive mechanism. A sample of 46 participants performed two auditory and visual duration discrimination tasks with extremely brief (50-ms standard duration) and longer (1000-ms standard duration) intervals. Better discrimination performance for auditory compared to visual intervals could be established for extremely brief and longer intervals. However, when performance on duration discrimination of longer intervals in the one-second range was controlled for modality-specific input from the sensory-automatic timing mechanism, the visual-auditory difference disappeared completely as indicated by virtually identical Weber fractions for both sensory modalities. These findings support the idea of a sensory-automatic mechanism underlying the observed visual-auditory differences in duration discrimination of extremely brief intervals in the millisecond range and longer intervals in the one-second range. Our data are consistent with the notion of a gradual transition from a purely modality-specific, sensory-automatic to a more cognitive, amodal timing mechanism. Within this transition zone, both mechanisms appear to operate simultaneously but the influence of the sensory-automatic timing mechanism is expected to continuously decrease with increasing interval duration.
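
    The Weber fractions mentioned above are simply the just-discriminable change in duration divided by the base duration. The sketch below computes them from made-up difference limens; the numbers are illustrative, not the study's data.

      def weber_fraction(threshold_ms, base_ms):
          """Relative duration-discrimination acuity: delta-T at threshold over base T."""
          return threshold_ms / base_ms

      # Hypothetical difference limens (ms) for each modality and base duration.
      conditions = {
          ("auditory", 50): 8,     ("visual", 50): 18,
          ("auditory", 1000): 80,  ("visual", 1000): 85,
      }
      for (modality, base), dl in conditions.items():
          print(f"{modality:8s} base {base:4d} ms: Weber fraction = {weber_fraction(dl, base):.3f}")
      # Identical Weber fractions would indicate equal relative temporal acuity across modalities.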

  17. Calling song recognition in female crickets: temporal tuning of identified brain neurons matches behavior.

    Science.gov (United States)

    Kostarakos, Konstantinos; Hedwig, Berthold

    2012-07-11

    Phonotactic orientation of female crickets is tuned to the temporal pattern of the male calling song. We analyzed the phonotactic selectivity of female crickets to varying temporal features of calling song patterns and compared it with the auditory response properties of the ascending interneuron AN1 (herein referred to as TH1-AC1) and four newly identified local brain neurons. The neurites of all brain neurons formed a ring-like branching pattern in the anterior protocerebrum that overlapped with the axonal arborizations of TH1-AC1. All brain neurons responded phasically to the sound pulses of a species-specific chirp. The spike activity of TH1-AC1 and the local interneuron, B-LI2, copied different auditory patterns regardless of their temporal structure. Two other neurons, B-LI3 and B-LC3, matched the temporal selectivity of the phonotactic responses but also responded to some nonattractive patterns. Neuron B-LC3 linked the bilateral auditory areas in the protocerebrum. One local brain neuron, B-LI4, received inhibitory as well as excitatory synaptic inputs. Inhibition was particularly pronounced for nonattractive pulse patterns, reducing its spike activity. When tested with different temporal patterns, B-LI4 exhibited bandpass response properties; its different auditory response functions significantly matched the tuning of phonotaxis. Temporal selectivity was established already for the second of two sound pulses separated by one species-specific pulse interval. Temporal pattern recognition in the cricket brain occurs within the anterior protocerebrum at the first stage of auditory processing. It is crucially linked to a change in auditory responsiveness during pulse intervals and based on fast interactions of inhibition and excitation.

  18. Avaliação do processamento auditivo de sons não-verbais em indivíduos com doença de Parkinson Auditory processing evaluation using nonverbal sounds in subjects with Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Eliana S. Miranda

    2004-08-01

    Full Text Available Auditory processing is understood as the process by which the individual manages the information received through hearing. The importance of the auditory perception of sound sequences and temporal patterns for the acquisition and comprehension of the symbolic components of language is well recognized, and the acoustic properties of speech can be reduced to the basic components of duration and frequency. Among the events we perceive through hearing, speech is the most important, and it may be altered in Parkinson's disease. AIM: to evaluate the performance of individuals with Parkinson's disease on the Frequency and Duration Pattern Tests. STUDY DESIGN: prospective clinical study. MATERIAL AND METHOD: the identification of non-verbal sound stimuli was assessed through three types of response: humming, naming, and pointing. The stimuli consisted of ten sequences of three and four sounds, varying in frequency and duration. RESULTS: there was no difference with regard to response type; performance was better with three-stimulus sequences than with four-stimulus sequences, and for the duration aspect as opposed to frequency. Moreover, the performance of the studied population was poorer than that of normal individuals. CONCLUSION: the ability to temporally order sounds is an important function of the central auditory nervous system. This ability allows the listener to make discriminations based on the ordering or sequencing of auditory stimuli. The contribution of this study is therefore significant, since it initiates reflection on the process of sound analysis and interpretation in individuals with Parkinson's disease.

  19. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng Rong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both the DMS and control tasks. Furthermore, coherence analysis revealed a beta-band (12 ~ 20 Hz) DMS-specific enhancement of the functional interaction between the sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  20. Exploring the role of the posterior middle temporal gyrus in semantic cognition: Integration of anterior temporal lobe with executive processes.

    Science.gov (United States)

    Davey, James; Thompson, Hannah E; Hallam, Glyn; Karapanagiotidis, Theodoros; Murphy, Charlotte; De Caso, Irene; Krieger-Redwood, Katya; Bernhardt, Boris C; Smallwood, Jonathan; Jefferies, Elizabeth

    2016-08-15

    Making sense of the world around us depends upon selectively retrieving information relevant to our current goal or context. However, it is unclear whether selective semantic retrieval relies exclusively on general control mechanisms recruited in demanding non-semantic tasks, or instead on systems specialised for the control of meaning. One hypothesis is that the left posterior middle temporal gyrus (pMTG) is important in the controlled retrieval of semantic (not non-semantic) information; however this view remains controversial since a parallel literature links this site to event and relational semantics. In a functional neuroimaging study, we demonstrated that an area of pMTG implicated in semantic control by a recent meta-analysis was activated in a conjunction of (i) semantic association over size judgements and (ii) action over colour feature matching. Under these circumstances the same region showed functional coupling with the inferior frontal gyrus - another crucial site for semantic control. Structural and functional connectivity analyses demonstrated that this site is at the nexus of networks recruited in automatic semantic processing (the default mode network) and executively demanding tasks (the multiple-demand network). Moreover, in both task and task-free contexts, pMTG exhibited functional properties that were more similar to ventral parts of inferior frontal cortex, implicated in controlled semantic retrieval, than more dorsal inferior frontal sulcus, implicated in domain-general control. Finally, the pMTG region was functionally correlated at rest with other regions implicated in control-demanding semantic tasks, including inferior frontal gyrus and intraparietal sulcus. We suggest that pMTG may play a crucial role within a large-scale network that allows the integration of automatic retrieval in the default mode network with executively-demanding goal-oriented cognition, and that this could support our ability to understand actions and non

  1. Rapid context-based identification of target sounds in an auditory scene

    Science.gov (United States)

    Gamble, Marissa L.; Woldorff, Marty G.

    2015-01-01

    To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. While it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff (2014) recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented, auditory scene. They reported an early, differential, bilateral activation (beginning ~60 ms) between feature-deviating Target stimuli and physically equivalent feature-deviating Nontargets, reflecting a rapid Target-detection process. This was followed shortly later (~130 ms) by the lateralized N2ac ERP activation, reflecting the focusing of auditory spatial attention toward the Target sound and paralleling attentional-shifting processes widely studied in vision. Here we directly examined the early, bilateral, Target-selective effect to better understand its nature and functional role. Participants listened to midline-presented sounds that included Target and Nontarget stimuli that were randomly either embedded in a brief rapid stream or presented alone. The results indicate that this early bilateral effect results from a template for the Target that utilizes its feature deviancy within a stream to enable rapid identification. Moreover, individual-differences analysis showed that the size of this effect was larger for subjects with faster response times. The findings support the hypothesis that our auditory attentional systems can implement and utilize a context-based relational template for a Target sound, making use of additional auditory information in the environment when needing to rapidly detect a relevant sound. PMID:25848684

  2. Processamento auditivo, leitura e escrita na síndrome de Silver-Russell: relato de caso Auditory processing, reading and writing in the Silver-Russell syndrome: case report

    Directory of Open Access Journals (Sweden)

    Patrícia Fernandes Garcia

    2012-03-01

    This case report describes the speech-language pathology aspects of auditory processing, reading and writing of a male patient diagnosed with Silver-Russell syndrome. At two months of age the patient presented a weight-for-height deficit; broad forehead; small, prominent and low-set ears; high palate; discrete micrognathia; blue sclera; café-au-lait spots; overlapping of the first and second right toes; gastroesophageal reflux; high-pitched voice and cry; mild neuropsychomotor development delay; and difficulty gaining weight, receiving the diagnosis of the syndrome. In the psychological evaluation, conducted when he was 8 years old, the patient presented a normal intellectual level, with cognitive difficulties involving sustained attention, concentration, immediate verbal memory, and emotional and behavioral processes. For an assessment of reading and writing and their underlying processes, carried out at age 9, the following tests were used: Reading Comprehension of Expository Texts, Phonological Abilities Profile, Auditory Discrimination Test, spontaneous writing, Scholastic Performance Test (SPT), Rapid Automatized Naming Test (RANT), and phonological working memory. He showed difficulties in all tests, with scores below those expected for his age. In the auditory processing assessment, monotic, diotic and dichotic tests were conducted. Altered results were found for sustained and selective auditory attention abilities, sequential memory for verbal and non-verbal sounds, and temporal resolution. It can be concluded that the patient presents alterations in the learning of reading and writing that might be secondary to Silver-Russell syndrome; however, these difficulties may also be due to deficits in auditory processing abilities.

  3. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy Avaliação eletrofisiológica e comportamental da audição em individuos com epilepsia em lobo temporal esquerdo

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha

    2010-02-01

    The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) for subjects with left mesial temporal sclerosis (LMTS) in relation to a behavioral test, the Dichotic Digits Test (DDT), and an event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and P300 amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls, and this difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. Regarding P300 latency and amplitude, as a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions in relation to controls, with the difference being statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of the P300 between affected and unaffected hemispheres.

  4. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    OpenAIRE

    Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘back...

  5. Multiple arithmetic operations in a single neuron: the recruitment of adaptation processes in the cricket auditory pathway depends on sensory context.

    Science.gov (United States)

    Hildebrandt, K Jannis; Benda, Jan; Hennig, R Matthias

    2011-10-01

    Sensory pathways process behaviorally relevant signals in various contexts and therefore have to adapt to differing background conditions. Depending on changes in signal statistics, this adjustment might be a combination of two fundamental computational operations: subtractive adaptation shifting a neuron's threshold and divisive gain control scaling its sensitivity. The cricket auditory system has to deal with highly stereotyped conspecific songs at low carrier frequencies, and likely much more variable predator signals at high frequencies. We proposed that due to the differences between the two signal classes, the operation that is implemented by adaptation depends on the carrier frequency. We aimed to identify the biophysical basis underlying the basic computational operations of subtraction and division. We performed in vivo intracellular and extracellular recordings in a first-order auditory interneuron (AN2) that is active in both mate recognition and predator avoidance. We demonstrated subtractive shifts at the carrier frequency of conspecific songs and division at the predator-like carrier frequency. Combined application of current injection and acoustic stimuli for each cell allowed us to demonstrate the subtractive effect of cell-intrinsic adaptation currents. Pharmacological manipulation enabled us to demonstrate that presynaptic inhibition is most likely the source of divisive gain control. We showed that adjustment to the sensory context can depend on the class of signals that are relevant to the animal. We further revealed that presynaptic inhibition is a simple mechanism for divisive operations. Unlike other proposed mechanisms, it is widely available in the sensory periphery of both vertebrates and invertebrates.
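
    The contrast between the two operations above can be sketched numerically. The following Python snippet is illustrative only: the sigmoidal rate function, the 10 dB threshold shift and the factor-of-two gain change are assumptions made for the sake of the example, not values taken from the study.

        import numpy as np

        def rate(intensity, threshold=50.0, slope=0.2, r_max=100.0):
            """Sigmoidal intensity-response curve (spikes/s); an illustrative stand-in."""
            return r_max / (1.0 + np.exp(-slope * (intensity - threshold)))

        intensities = np.linspace(20, 90, 8)      # stimulus intensities (dB SPL, arbitrary)
        baseline    = rate(intensities)

        # Subtractive adaptation: an intrinsic adaptation current effectively raises the
        # threshold, shifting the curve along the intensity axis without changing its maximum.
        subtractive = rate(intensities - 10.0)

        # Divisive gain control: presynaptic inhibition scales responsiveness,
        # compressing the output range (here by a factor of two).
        divisive    = baseline / 2.0

        for i, b, s, d in zip(intensities, baseline, subtractive, divisive):
            print(f"{i:5.1f} dB  baseline {b:6.1f}  subtractive {s:6.1f}  divisive {d:6.1f}")

    The subtractive shift moves the operating point along the intensity axis while preserving the maximum response, whereas the divisive operation compresses the response range; this is the distinction the study maps onto cell-intrinsic adaptation currents versus presynaptic inhibition.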

  6. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...

  7. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...
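
    As a rough illustration of the multiplicative structure described in the two records above, the driving random intensity can be written generically as (the factorisation and symbols below are an assumed generic form, not the authors' exact specification)

        \Lambda(s,t) = \exp\{ z_1(s)^{\top}\beta_1 \} \, \exp\{ z_2(t)^{\top}\beta_2 \} \, S(s,t),
        \qquad
        S(s,t) = \sum_{(c,u) \in \Phi} \gamma \, k(s - c,\, t - u),

    where z_1(s) and z_2(t) are spatial and temporal covariate vectors with coefficients beta_1 and beta_2, and the residual S(s,t) is a shot-noise process built from a Poisson process Phi of cluster centres, a kernel k and a weight gamma. Conditional on the random intensity Lambda, the points form a Poisson process, which is what keeps Cox models of this kind tractable.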

  8. Temporal Processing Ability Is Related to Ear-Asymmetry for Detecting Time Cues in Sound: A Mismatch Negativity (MMN) Study

    Science.gov (United States)

    Todd, Juanita; Finch, Brayden; Smith, Ellen; Budd, Timothy W.; Schall, Ulrich

    2011-01-01

    Temporal and spectral sound information is processed asymmetrically in the brain, with the left hemisphere showing an advantage for processing the former and the right hemisphere for the latter. Using monaural sound presentation, we demonstrate a context- and ability-dependent ear asymmetry in brain measures of temporal change detection. Our measure…

  9. Effect of temporal predictability on exogenous attentional modulation of feedforward processing in the striate cortex.

    Science.gov (United States)

    Dassanayake, Tharaka L; Michie, Patricia T; Fulham, Ross

    2016-07-01

    Non-informative peripheral visual cues facilitate extrastriate processing of targets [as indexed by enhanced amplitude of the contralateral P1 event-related potential (ERP) component] presented at the cued location as opposed to those presented at uncued locations, at short cue-target stimulus onset asynchrony (SOA). Two lines of research have recently emerged to suggest that the locus of attentional modulation is flexible and depends on (1) perceptual load and (2) the temporal predictability of visual stimuli. We aimed to examine the effect of temporal predictability on attentional modulation of feedforward activation of the striate cortex (as indexed by the C1 ERP component) by high-perceptual-load (HPL) stimuli. We conducted two ERP experiments in which exogenously cued HPL targets were presented under two temporal predictability conditions. In Experiment 1 [high-temporal-predictability (HTP) condition], 17 healthy subjects (age 18-26 years) performed a line-orientation discrimination task on HPL targets presented in the periphery of the left upper or diagonally opposite right lower visual field, validly or invalidly cued by peripheral cues. SOA was fixed at 160 ms. In Experiment 2 [low-temporal-predictability (LTP) condition] (n=10, age 19-36 years), we retained the HPL stimuli but randomly intermixed short-SOA trials with long-SOA (1000 ms) trials within the task blocks. In Experiment 1 and the short-SOA condition of Experiment 2, validly cued targets elicited significantly faster reaction times and larger contralateral P1 amplitudes, consistent with previous literature. A significant attentional enhancement of C1 amplitude was also observed in the HTP, but not the LTP, condition. The findings suggest that exogenous visual attention can facilitate the earliest stage of cortical processing under HTP conditions. PMID:27114044

  10. Global data for ecology and epidemiology: a novel algorithm for temporal Fourier processing MODIS data.

    Directory of Open Access Journals (Sweden)

    Jörn P W Scharlemann

    BACKGROUND: Remotely-sensed environmental data from earth-orbiting satellites are increasingly used to model the distribution and abundance of both plant and animal species, especially those of economic or conservation importance. Time series of data from the MODerate-resolution Imaging Spectroradiometer (MODIS) sensors on board NASA's Terra and Aqua satellites offer the potential to capture environmental thermal and vegetation seasonality, through temporal Fourier analysis, more accurately than was previously possible using NOAA Advanced Very High Resolution Radiometer (AVHRR) sensor data. MODIS data are composited over 8- or 16-day time intervals, which poses unique problems for temporal Fourier analysis: applying standard techniques to MODIS data can introduce errors of up to 30% in the estimation of the amplitudes and phases of the Fourier harmonics. METHODOLOGY/PRINCIPAL FINDINGS: We present a novel spline-based algorithm that overcomes the processing problems of composited MODIS data. The algorithm is tested on artificial data generated using randomly selected values of both amplitudes and phases, and provides an accurate estimate of the input variables under all conditions. The algorithm was then applied to produce layers that capture the seasonality in MODIS data for the period from 2001 to 2005. CONCLUSIONS/SIGNIFICANCE: Global temporal-Fourier-processed images of 1 km MODIS data for Middle Infrared Reflectance, day- and night-time Land Surface Temperature (LST), Normalised Difference Vegetation Index (NDVI), and Enhanced Vegetation Index (EVI) are presented for ecological and epidemiological applications. The finer spatial and temporal resolution, combined with the greater geolocational and spectral accuracy of the MODIS instruments compared with previous multi-temporal data sets, means that these data may be used with greater confidence in species' distribution modelling.
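
    To make the underlying idea concrete, the following Python snippet performs an ordinary least-squares harmonic (temporal Fourier) fit to a synthetic 8-day composited NDVI series. It is a minimal sketch of standard harmonic regression only, using made-up data and two harmonics; it does not reproduce the spline-based correction for compositing that the study introduces.

        import numpy as np

        days = np.arange(0, 365, 8, dtype=float)                  # nominal 8-day composite dates
        rng = np.random.default_rng(0)
        ndvi = (0.4 + 0.2 * np.cos(2 * np.pi * (days - 200) / 365)
                + 0.05 * rng.normal(size=days.size))              # synthetic NDVI series

        periods = [365.0, 182.5]                                   # annual and semi-annual harmonics
        cols = [np.ones_like(days)]
        for p in periods:
            cols += [np.cos(2 * np.pi * days / p), np.sin(2 * np.pi * days / p)]
        X = np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)            # [mean, a1, b1, a2, b2]
        print(f"mean NDVI {coef[0]:.3f}")
        for i, p in enumerate(periods):
            a, b = coef[1 + 2 * i], coef[2 + 2 * i]
            print(f"period {p:6.1f} d: amplitude {np.hypot(a, b):.3f}, "
                  f"phase {np.arctan2(b, a):+.2f} rad")

    With regularly spaced, uncomposited samples this recovers amplitudes and phases well; the errors of up to 30% mentioned above arise because a composite value may represent any day within its 8- or 16-day window rather than a fixed date, which is the problem the spline-based algorithm is designed to address.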

  11. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Directory of Open Access Journals (Sweden)

    David Alais

    BACKGROUND: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. METHODOLOGY/PRINCIPAL FINDINGS: Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, nor did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. CONCLUSIONS/SIGNIFICANCE: The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns

  12. Survey of some recent advances in spatial-temporal point processes

    OpenAIRE

    Greenspan, Ben

    2013-01-01

    Spatial-temporal point processes have been useful for applications in many fields, including the study of earthquakes, wildfires, and other natural disasters, as well as forests and other ecological data, neurological data, invasive species, epidemics, spatial debris, and many others. Recent works draw new conclusions about the general model, applications to earthquakes, higher-order statistics, and residual analysis. Within each chapter, the principles from the cited works are summarized. Various...

  13. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    Science.gov (United States)

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adapt adequately to a dynamically changing visual environment full of noisy visual scenes, knowledge-mediated disambiguation must be implemented in the brain so that processing can proceed as fast as possible given the limited capacity of visual image processing. However, the temporal profile of the disambiguation process in the brain has not yet been fully elucidated. The present study attempted to determine how quickly knowledge-mediated disambiguation begins to proceed along visual areas after the onset of a two-tone ambiguous image, using magnetoencephalography with high temporal resolution. Within the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed a significant activity reduction in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in the preceding activity (approximately 115 ms) in the cuneus, when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggest that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms after an ambiguous visual scene appears, at least in the lateral occipital area, and they provide insight into the temporal profile of the disambiguation of a noisy visual scene with prior knowledge.

  14. Spatio-temporal pattern of vestibular information processing after brief caloric stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Marcelli, Vincenzo [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy); Esposito, Fabrizio [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy); Department of Cognitive Neurosciences, University of Maastricht, Maastricht (Netherlands)], E-mail: fabrizio.esposito@unina.it; Aragri, Adriana [Department of Neurological Sciences, Second University of Naples, Naples (Italy); Furia, Teresa; Riccardi, Pasquale [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy); Tosetti, Michela; Biagi, Laura [I.R.C.S.S. 'Stella Maris', Pisa (Italy); Marciano, Elio [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy); Di Salle, Francesco [Department of Cognitive Neurosciences, University of Maastricht, Maastricht (Netherlands); I.R.C.S.S. 'Stella Maris', Pisa (Italy); Department of Neurosciences, University of Pisa, Pisa (Italy)

    2009-05-15

    Processing of vestibular information at the cortical and subcortical level is essential for head and body orientation in space and self-motion perception, but little is known about the neural dynamics of the brain regions of the vestibular system involved in this task. Neuroimaging studies using both galvanic and caloric stimulation have shown that several distinct cortical and subcortical structures can be activated during vestibular information processing. The insular cortex has been often targeted and presented as the central hub of the vestibular cortical system. Since very short pulses of cold water ear irrigation can generate a strong and prolonged vestibular response and a nystagmus, we explored the effects of this type of caloric stimulation for assessing the blood-oxygen-level-dependent (BOLD) dynamics of neural vestibular processing in a whole-brain event-related functional magnetic resonance imaging (fMRI) experiment. We evaluated the spatial layout and the temporal dynamics of the activated cortical and subcortical regions in time-locking with the instant of injection and were able to extract a robust pattern of neural activity involving the contra-lateral insular cortex, the thalamus, the brainstem and the cerebellum. No significant correlation with the temporal envelope of the nystagmus was found. The temporal analysis of the activation profiles highlighted a significantly longer duration of the evoked BOLD activity in the brainstem compared to the insular cortex suggesting a functional de-coupling between cortical and subcortical activity during the vestibular response.

  15. Spatio-temporal pattern of vestibular information processing after brief caloric stimulation

    International Nuclear Information System (INIS)

    Processing of vestibular information at the cortical and subcortical level is essential for head and body orientation in space and self-motion perception, but little is known about the neural dynamics of the brain regions of the vestibular system involved in this task. Neuroimaging studies using both galvanic and caloric stimulation have shown that several distinct cortical and subcortical structures can be activated during vestibular information processing. The insular cortex has been often targeted and presented as the central hub of the vestibular cortical system. Since very short pulses of cold water ear irrigation can generate a strong and prolonged vestibular response and a nystagmus, we explored the effects of this type of caloric stimulation for assessing the blood-oxygen-level-dependent (BOLD) dynamics of neural vestibular processing in a whole-brain event-related functional magnetic resonance imaging (fMRI) experiment. We evaluated the spatial layout and the temporal dynamics of the activated cortical and subcortical regions in time-locking with the instant of injection and were able to extract a robust pattern of neural activity involving the contra-lateral insular cortex, the thalamus, the brainstem and the cerebellum. No significant correlation with the temporal envelope of the nystagmus was found. The temporal analysis of the activation profiles highlighted a significantly longer duration of the evoked BOLD activity in the brainstem compared to the insular cortex suggesting a functional de-coupling between cortical and subcortical activity during the vestibular response.

  16. Are left fronto-temporal brain areas a prerequisite for normal music-syntactic processing?

    Science.gov (United States)

    Sammler, Daniela; Koelsch, Stefan; Friederici, Angela D

    2011-06-01

    An increasing number of neuroimaging studies in music cognition research suggest that "language areas" are involved in the processing of musical syntax, but none of these studies clarified whether these areas are a prerequisite for normal syntax processing in music. The present electrophysiological experiment tested whether patients with lesions in Broca's area (N=6) or in the left anterior temporal lobe (N=7) exhibit deficits in the processing of structure in music compared to matched healthy controls (N=13). A chord sequence paradigm was applied, and the amplitude and scalp topography of the Early Right Anterior Negativity (ERAN), an electrophysiological marker of musical syntax processing that correlates with activity in Broca's area and its right-hemisphere homotope, were examined. Left inferior frontal gyrus (IFG) (but not anterior s