WorldWideScience

Sample records for auditory temporal processing

  1. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Full Text Available Several studies have reported age-related decline in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect overall changes in the same mechanisms, or reflect age-related changes in two independent mechanisms. In the current study we examined whether the age-related decline in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82, performed the dichotic temporal order judgment task and the backward digit span task. The findings indicate that the age-related declines in auditory temporal resolution and in working memory are two independent processes.

  2. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  3. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by addressing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  4. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with
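
    The contrast drawn above between synchrony (temporal) coding and rate coding of AM can be made concrete with a small computation. The sketch below is a generic illustration, not analysis code from the reviewed studies: given a list of spike times recorded during a sinusoidally amplitude-modulated stimulus, it returns the mean firing rate (a rate code) and the vector strength at the modulation frequency (a synchrony code); the toy spike train and its parameters are assumptions.

```python
import numpy as np

def rate_and_vector_strength(spike_times, mod_freq, duration):
    """Summarize AM coding with a rate measure and a synchrony measure.

    spike_times : spike times in seconds
    mod_freq    : modulation frequency in Hz
    duration    : stimulus duration in seconds
    """
    spike_times = np.asarray(spike_times)
    firing_rate = spike_times.size / duration            # rate code (spikes/s)
    phases = 2 * np.pi * mod_freq * spike_times          # modulation phase of each spike
    # Vector strength: length of the mean resultant vector (0 = no locking, 1 = perfect locking)
    vector_strength = np.abs(np.mean(np.exp(1j * phases))) if spike_times.size else 0.0
    return firing_rate, vector_strength

# Hypothetical spike train loosely locked to a 40-Hz modulation over 1 s
rng = np.random.default_rng(0)
spikes = np.arange(0, 1, 1 / 40) + rng.normal(0, 0.002, 40)
print(rate_and_vector_strength(spikes, mod_freq=40, duration=1.0))
```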

  5. Altered Auditory and Multisensory Temporal Processing in Autism Spectrum Disorders

    Science.gov (United States)

    Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Cascio, Carissa J.; Stone, Wendy L.; Wallace, Mark T.

    2011-01-01

    Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as by repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli also have been reported frequently. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recent published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies. PMID:21258617

  6. Altered auditory and multisensory temporal processing in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Leslie D Kwakye

    2011-01-01

    Full Text Available Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli have also frequently been reported. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recent published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically-developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies.
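
    The TOJ thresholds described in this record are typically obtained by fitting a psychometric function to order judgments collected at several stimulus onset asynchronies (SOAs). The sketch below is only a minimal illustration of that general procedure, with made-up data and a conventional 75%-point threshold; it is not the fitting routine used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """Cumulative Gaussian: probability of reporting 'auditory first' at a given SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Hypothetical data: SOAs in ms (negative = visual first) and proportion of 'auditory first' responses
soas = np.array([-150.0, -90.0, -45.0, -15.0, 15.0, 45.0, 90.0, 150.0])
p_auditory_first = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75, 0.90, 0.97])

(pss, sigma), _ = curve_fit(psychometric, soas, p_auditory_first, p0=[0.0, 50.0])
# One common threshold (JND) definition: SOA offset from the PSS that yields 75% consistent judgments
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, TOJ threshold (JND) = {jnd:.1f} ms")
```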

  7. Spectral and temporal processing in rat posterior auditory cortex.

    Science.gov (United States)

    Pandya, Pritesh K; Rathbun, Daniel L; Moucha, Raluca; Engineer, Navzer D; Kilgard, Michael P

    2008-02-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than those of A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than in A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. PAF neurons exhibit latencies that are twice as long as those of A1 neurons. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for the rate or direction of narrowband, one-octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex.

  8. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    Science.gov (United States)

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  9. Carrier-dependent temporal processing in an auditory interneuron.

    Science.gov (United States)

    Sabourin, Patrick; Gottlieb, Heather; Pollack, Gerald S

    2008-05-01

    Signal processing in the auditory interneuron Omega Neuron 1 (ON1) of the cricket Teleogryllus oceanicus was compared at high- and low-carrier frequencies in three different experimental paradigms. First, integration time, which corresponds to the time it takes for a neuron to reach threshold when stimulated at the minimum effective intensity, was found to be significantly shorter at high-carrier frequency than at low-carrier frequency. Second, phase locking to sinusoidally amplitude modulated signals was more efficient at high frequency, especially at high modulation rates and low modulation depths. Finally, we examined the efficiency with which ON1 detects gaps in a constant tone. As reflected by the decrease in firing rate in the vicinity of the gap, ON1 is better at detecting gaps at low-carrier frequency. Following a gap, firing rate increases beyond the pre-gap level. This "rebound" phenomenon is similar for low- and high-carrier frequencies.

  10. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    Science.gov (United States)

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery.

  11. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  12. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble of aud...

  13. Receptive amusia: temporal auditory processing deficit in a professional musician following a left temporo-parietal lesion.

    Science.gov (United States)

    Di Pietro, Marie; Laganaro, Marina; Leemann, Béatrice; Schnider, Armin

    2004-01-01

    This study examined musical processing in a professional musician who suffered from amusia after a left temporo-parietal stroke. The patient showed preserved metric judgement and normal performance in all aspects of melodic processing. By contrast, he lost the ability to discriminate or reproduce rhythms. Arrhythmia was only observed in the auditory modality: discrimination of auditorily presented rhythms was severely impaired, whereas performance was normal in the visual modality. Moreover, a length effect was observed in discrimination of rhythm, while this was not the case for melody discrimination. The arrhythmia could not be explained by low-level auditory processing impairments such as interval and length discrimination, and the impairment was limited to auditory input, since the patient produced correct rhythmic patterns from a musical score. Since rhythm processing was selectively disturbed in the auditory modality, the arrhythmia cannot be attributed to an impairment of supra-modal temporal processing. Rather, our findings suggest modality-specific encoding of musical temporal information. Furthermore, it is proposed that the processing of auditory rhythmic sequences involves a specific left-hemispheric temporal buffer.

  14. Superior Temporal Activity for the Retrieval Process of Auditory-Word Associations

    Directory of Open Access Journals (Sweden)

    Toshimune Kambara

    2011-10-01

    Full Text Available Previous neuroimaging studies have reported that learning multisensory associations involves the superior temporal regions (Tanabe et al., 2005). However, the neural mechanisms underlying the retrieval of multi-sensory associations were unclear. This functional MRI (fMRI) study investigated brain activations during the retrieval of multi-sensory associations. Eighteen right-handed college-aged Japanese participants learned associations between meaningless pictures and words (Vw), meaningless sounds and words (Aw), and meaningless sounds and visual words (W). During fMRI scanning, participants were presented with old and new words and were required to judge whether the words were included in the conditions of Vw, Aw, W or New. We found that the left superior temporal region showed greater activity during the retrieval of words learned in Aw than in Vw, whereas no region showed greater activity for the Vw condition versus the Aw condition (k > 10, p < .001, uncorrected). Taken together, the left superior temporal region could play an essential role in the retrieval process of auditory-word associations.

  15. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  16. Effects of temporal trial-by-trial cuing on early and late stages of auditory processing: evidence from event-related potentials.

    Science.gov (United States)

    Lampar, Alexa; Lange, Kathrin

    2011-08-01

    Temporal-cuing studies show faster responding to stimuli at an attended versus unattended time point. Whether the mechanisms involved in this temporal orienting of attention are located early or late in the processing stream has not been answered unequivocally. To address this question, we measured event-related potentials in two versions of an auditory temporal cuing task: stimuli at the uncued time point either required a response (Experiment 1) or did not (Experiment 2). In both tasks, attention was oriented to the cued time point, but attention could be selectively focused on the cued time point only in Experiment 2. In both experiments, temporal orienting was associated with a late positivity in the time range of the P3. An early enhancement in the time range of the auditory N1 was observed only in Experiment 2. Thus, temporal attention improves auditory processing at early sensory levels only when it can be focused selectively.

  17. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...

  18. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  19. Identified auditory neurons in the cricket Gryllus rubens: temporal processing in calling song sensitive units.

    Science.gov (United States)

    Farris, Hamilton E; Mason, Andrew C; Hoy, Ronald R

    2004-07-01

    This study characterizes aspects of the anatomy and physiology of auditory receptors and certain interneurons in the cricket Gryllus rubens. We identified an 'L'-shaped ascending interneuron tuned to frequencies > 15 kHz (57 dB SPL threshold at 20 kHz). Also identified were two intrasegmental 'omega'-shaped interneurons that were broadly tuned to 3-65 kHz, with best sensitivity to frequencies of the male calling song (5 kHz, 52 dB SPL). The temporal sensitivity of units excited by calling song frequencies was measured using sinusoidally amplitude modulated stimuli that varied in both modulation rate and depth, parameters that vary with song propagation distance and the number of singing males. Omega cells responded like low-pass filters with a time constant of 42 ms. In contrast, receptors significantly coded modulation rates up to the maximum rate presented (85 Hz). Whereas omegas required approximately 65% modulation depth at 45 Hz (calling song AM) to elicit significant synchrony coding, receptors tolerated an approximately 50% reduction in modulation depth up to 85 Hz. These results suggest that omega cells in G. rubens might not play a role in detecting song modulation per se at increased distances from a singing male.
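
    Stimuli of the kind used here, sinusoidally amplitude-modulated tones varying in modulation rate and depth, can be synthesized in a few lines. The sketch below is illustrative only; the carrier frequency, sampling rate, and parameter values are assumptions rather than the exact stimulus specification of the study.

```python
import numpy as np

def sam_tone(carrier_hz, mod_rate_hz, mod_depth, duration_s, fs=44100):
    """Sinusoidally amplitude-modulated (SAM) tone.

    mod_depth is the modulation depth m (0 = unmodulated, 1 = 100% modulated).
    """
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# e.g. a 5-kHz carrier (calling-song band) modulated at 45 Hz with 65% depth
signal = sam_tone(carrier_hz=5000, mod_rate_hz=45, mod_depth=0.65, duration_s=1.0)
```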

  20. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  1. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: an ERP study.

    Science.gov (United States)

    van Zuijen, Titia L; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-10-18

    Dyslexia is heritable and associated with auditory processing deficits. We investigated whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or without familial risk for dyslexia to investigate whether their auditory system was able to detect a temporal change in a tone pattern. The children were followed longitudinally and performed intelligence and language development tests at ages 4 and 4.5 years. Literacy-related skills were measured at the beginning of second grade, and word- and pseudo-word reading fluency were measured at the end of second grade. The EEG responses showed that control children could detect the temporal change, as indicated by a mismatch response (MMR). The MMR was not observed in at-risk children. Furthermore, the fronto-central MMR amplitude correlated with preliterate language comprehension and with later word reading fluency, but not with phonological awareness. We conclude that temporal auditory processing differentiates young children at risk for dyslexia from controls and is a precursor of preliterate language comprehension and reading fluency.

  2. Temporal processing, localization and auditory closure in individuals with unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Regiane Nishihata

    2012-01-01

    PURPOSE: To characterize temporal processing, sound localization, and auditory closure, and to investigate possible associations with complaints of learning, communication and language difficulties in individuals with unilateral hearing loss. METHODS: Participants were 26 individuals with ages between 8 and 15 years, divided into two groups: Unilateral hearing loss group; and Normal hearing group. Each group was composed of 13 individuals, matched by gender, age and educational level. All subjects were submitted to anamnesis, peripheral hearing evaluation, and auditory processing evaluation through behavioral tests of sound localization, sequential memory, Random Gap Detection test, and speech-in-noise test. Nonparametric statistical tests were used to compare the groups, considering the presence or absence of hearing loss and the ear with hearing loss. RESULTS: Unilateral hearing loss started during preschool, and had unknown or identified etiologies, such as meningitis, traumas or mumps. Most individuals reported delays in speech, language and learning development, especially those with hearing loss in the right ear. The group with hearing loss had worse responses in the abilities of temporal ordering and resolution, sound localization and auditory closure. Individuals with hearing loss in the left ear showed worse results than those with hearing loss in the right ear in all abilities, except in sound localization. CONCLUSION: The presence of unilateral hearing loss causes sound localization, auditory closure, temporal ordering and temporal resolution difficulties. Individuals with unilateral hearing loss in the right ear have more complaints than those with unilateral hearing loss in the left ear. Individuals with hearing loss in the left ear have more difficulties in auditory closure, temporal resolution, and temporal ordering.

  3. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko eOkamoto

    2012-05-01

    Full Text Available Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  4. Calcium-dependent control of temporal processing in an auditory interneuron: a computational analysis.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2010-09-01

    Sensitivity to acoustic amplitude modulation in crickets differs between species and depends on carrier frequency (e.g., calling song vs. bat-ultrasound bands). Using computational tools, we explore how Ca(2+)-dependent mechanisms underlying selective attention can contribute to such differences in amplitude modulation sensitivity. For omega neuron 1 (ON1), selective attention is mediated by Ca(2+)-dependent feedback: [Ca(2+)](internal) increases with excitation, activating a Ca(2+)-dependent after-hyperpolarizing current. We propose that Ca(2+) removal rate and the size of the after-hyperpolarizing current can determine ON1's temporal modulation transfer function (TMTF). This is tested using a conductance-based simulation calibrated to responses in vivo. The model shows that parameter values that simulate responses to single pulses are sufficient in simulating responses to modulated stimuli: no special modulation-sensitive mechanisms are necessary, as high and low-pass portions of the TMTF are due to Ca(2+)-dependent spike frequency adaptation and post-synaptic potential depression, respectively. Furthermore, variance in the two biophysical parameters is sufficient to produce TMTFs of varying bandwidth, shifting amplitude modulation sensitivity like that in different species and in response to different carrier frequencies. Thus, the hypothesis that the size of after-hyperpolarizing current and the rate of Ca(2+) removal can affect amplitude modulation sensitivity is computationally validated.
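
    The mechanism proposed above, an after-hyperpolarizing current driven by accumulated intracellular Ca(2+), can be caricatured with a much simpler model than the conductance-based simulation of the paper. The sketch below is an assumed leaky integrate-and-fire neuron with a Ca-like adaptation variable; the parameter values are arbitrary, but the two quantities highlighted in the abstract (the size of the adaptation increment and the Ca removal time constant) directly control how quickly the firing rate adapts.

```python
import numpy as np

def adaptive_lif(i_input, dt=1e-4, tau_m=0.01, tau_ca=0.2, g_ahp=30.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0, ca_step=0.02):
    """Leaky integrate-and-fire neuron with a Ca-like adaptation (AHP) current.

    tau_ca  : decay time constant of the adaptation variable ("Ca removal rate")
    g_ahp   : strength of the after-hyperpolarizing current per unit adaptation
    ca_step : adaptation increment added at each spike
    """
    v, ca, spikes = v_rest, 0.0, []
    for step, i_ext in enumerate(i_input):
        v += (-(v - v_rest) - g_ahp * ca + i_ext) * dt / tau_m
        ca -= ca * dt / tau_ca                 # Ca2+ removal between spikes
        if v >= v_thresh:                      # spike: reset voltage, increment adaptation
            spikes.append(step * dt)
            v = v_reset
            ca += ca_step
    return np.array(spikes)

# Constant drive: inter-spike intervals lengthen as the AHP current builds up
spike_times = adaptive_lif(np.full(10000, 3.0))
print(np.diff(spike_times)[:5])                # spike-frequency adaptation
```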

  5. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    for the chirp-evoked ABRs indicated a relation to SRTs and the ability to process temporal fine structure. Overall, the results demonstrate the importance of low-frequency temporal processing for speech reception which can be affected even if pure-tone sensitivity is close to normal....

  6. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds. We could evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL were found to present with similar deficits in pitch retention, and in identification and short-term memorisation of environmental sounds, while not being impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  7. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
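
    Phase tracking of the kind reported here is commonly quantified with inter-trial phase coherence (ITC): the consistency of band-limited phase across repetitions of the same stimulus. The sketch below is a generic illustration with assumed filter settings, sampling rate, and array shapes, not the analysis pipeline of the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_coherence(trials, fs, band):
    """Inter-trial phase coherence for a (n_trials, n_samples) data matrix.

    Returns an ITC time course between 0 (random phase) and 1 (identical phase across trials).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)            # band-limit each trial
    phase = np.angle(hilbert(filtered, axis=1))          # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))   # resultant vector length over trials

# Hypothetical example: 50 trials of 1 s sampled at 600 Hz, theta band (4-8 Hz)
rng = np.random.default_rng(1)
data = rng.standard_normal((50, 600))
theta_itc = inter_trial_coherence(data, fs=600, band=(4, 8))
```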

  8. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine ePecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
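
    The prediction measure mentioned above rests on cross-correlating tap timing with the pacing sequence. A minimal sketch of the idea, with made-up interval series and without the exact lag conventions of the original tapping studies: a 'predictor' correlates current inter-tap intervals with current pacing intervals (lag 0) more strongly than with the previous pacing interval (lag 1, which would indicate tracking).

```python
import numpy as np

def prediction_index(inter_tap, inter_onset):
    """Compare prediction (lag-0) against tracking (lag-1) correlations.

    inter_tap   : successive inter-tap intervals produced by the tapper
    inter_onset : successive inter-onset intervals of the pacing sequence
    """
    inter_tap = np.asarray(inter_tap, dtype=float)
    inter_onset = np.asarray(inter_onset, dtype=float)
    r_predict = np.corrcoef(inter_tap[1:], inter_onset[1:])[0, 1]   # lag 0: prediction
    r_track = np.corrcoef(inter_tap[1:], inter_onset[:-1])[0, 1]    # lag 1: tracking
    return r_predict / r_track      # > 1 suggests prediction rather than tracking

# Toy pacing sequence with sinusoidal tempo changes (intervals in ms)
ioi = 550 + 50 * np.sin(np.linspace(0, 4 * np.pi, 30))
iti = ioi + np.random.default_rng(2).normal(0, 5, 30)    # a tapper who predicts each interval
print(prediction_index(iti, ioi))
```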

  9. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  10. Auditory temporal resolution and integration - stages of analyzing time-varying sounds

    DEFF Research Database (Denmark)

    Pedersen, Benjamin

    2007-01-01

    …, much is still unknown of how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, and the topic is approached via four different listening experiments designed to probe several aspects of temporal processing… scheme: Effects such as attention seem to play an important role in loudness integration, and further, it will be demonstrated that the auditory system can rely on temporal cues at a much finer level of detail than predicted by existing models (temporal details in the time range of 60 µs can...

  11. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2

    OpenAIRE

    2016-01-01

    Introduction “Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action” (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution a...

  12. Auditory processing models

    DEFF Research Database (Denmark)

    Dau, Torsten

    2008-01-01

    The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of Acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each Section of the Handbook will pr...

  13. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis, results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  14. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  15. Temporal resolution in the hearing system and auditory evoked potentials

    DEFF Research Database (Denmark)

    Miller, Lee; Beedholm, Kristian

    2008-01-01

    A popular type of investigation with auditory evoked potentials (AEP) consists of mapping the dependency of the envelope following response on the AM frequency. This results in what is called the modulation rate transfer function (MRTF). The physiological interpretation of the MRTF is not straightforward, but it is often used as a measure of the ability of the auditory system to encode temporal changes. It is, however, shown here that the MRTF must depend on the waveform of the click-evoked AEP (ceAEP), which does not relate directly to temporal resolution. The theoretical...
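
    The theoretical point of this record, that the measured MRTF is constrained by the waveform of the click-evoked AEP, follows from a linear-systems view: if the envelope-following response is approximated as the stimulus envelope convolved with the click-evoked response, then the predicted MRTF is simply the magnitude spectrum of that click-evoked waveform read off at each AM frequency. The sketch below illustrates that argument with an assumed, synthetic ceAEP; it is not the authors' analysis code.

```python
import numpy as np

def predicted_mrtf(ce_aep, fs, am_frequencies):
    """Predict the modulation rate transfer function from a click-evoked AEP.

    Linear assumption: envelope-following response = stimulus envelope convolved with
    ce_aep, so the transfer function is the FFT magnitude of ce_aep.
    """
    spectrum = np.abs(np.fft.rfft(ce_aep))
    freqs = np.fft.rfftfreq(len(ce_aep), d=1 / fs)
    # Read off the magnitude at the FFT bin nearest each AM frequency
    return np.array([spectrum[np.argmin(np.abs(freqs - f))] for f in am_frequencies])

# Assumed click-evoked AEP: a damped 100-Hz oscillation sampled at 2 kHz
fs = 2000
t = np.arange(0, 0.05, 1 / fs)
ce_aep = np.exp(-t / 0.01) * np.sin(2 * np.pi * 100 * t)
mrtf = predicted_mrtf(ce_aep, fs, am_frequencies=[20, 40, 80, 160, 320])
```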

  16. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    Science.gov (United States)

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.

  17. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  18. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    Science.gov (United States)

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation.

  19. Temporal pattern recognition based on instantaneous spike rate coding in a simple auditory system.

    Science.gov (United States)

    Nabatiyan, A; Poulet, J F A; de Polavieja, G G; Hedwig, B

    2003-10-01

    Auditory pattern recognition by the CNS is a fundamental process in acoustic communication. Because crickets communicate with stereotyped patterns of constant frequency syllables, they are established models to investigate the neuronal mechanisms of auditory pattern recognition. Here we provide evidence that for the neural processing of amplitude-modulated sounds, the instantaneous spike rate rather than the time-averaged neural activity is the appropriate coding principle by comparing both coding parameters in a thoracic interneuron (Omega neuron ON1) of the cricket (Gryllus bimaculatus) auditory system. When stimulated with different temporal sound patterns, the analysis of the instantaneous spike rate demonstrates that the neuron acts as a low-pass filter for syllable patterns. The instantaneous spike rate is low at high syllable rates, but prominent peaks in the instantaneous spike rate are generated as the syllable rate resembles that of the species-specific pattern. The occurrence and repetition rate of these peaks in the neuronal discharge are sufficient to explain temporal filtering in the cricket auditory pathway as they closely match the tuning of phonotactic behavior to different sound patterns. Thus temporal filtering or "pattern recognition" occurs at an early stage in the auditory pathway.
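
    The coding principle contrasted in this record, instantaneous spike rate versus time-averaged activity, can be read directly off a spike train: the instantaneous rate at each spike is the reciprocal of the preceding inter-spike interval, so a brief burst produces a sharp peak that a whole-trial average would smear out. The sketch below uses a made-up spike train purely for illustration.

```python
import numpy as np

def instantaneous_rate(spike_times):
    """Instantaneous spike rate (Hz): reciprocal of each inter-spike interval."""
    spike_times = np.asarray(spike_times)
    return spike_times[1:], 1.0 / np.diff(spike_times)

# Hypothetical spike train: 20-Hz background firing with a short 250-Hz burst near 0.5 s
spikes = np.concatenate([np.arange(0.0, 0.5, 0.05),
                         0.5 + np.arange(5) * 0.004,
                         np.arange(0.55, 1.0, 0.05)])
times, rate = instantaneous_rate(spikes)
print(rate.max(), spikes.size / 1.0)    # peak instantaneous rate vs. mean rate over 1 s
```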

  20. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  1. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.

  2. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    Science.gov (United States)

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing.
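
    The passive properties discussed above combine in a single, simple relationship: the membrane time constant equals input resistance times capacitance, so the developmental drop in either quantity shortens the window over which synaptic inputs are integrated. A toy calculation with assumed values (not measurements from the study):

```python
# Membrane time constant: tau = R_input * C_membrane
r_in_young, c_m_young = 300e6, 30e-12    # assumed: 300 MOhm, 30 pF
r_in_adult, c_m_adult = 80e6, 20e-12     # assumed:  80 MOhm, 20 pF

tau_young = r_in_young * c_m_young       # 9.0 ms
tau_adult = r_in_adult * c_m_adult       # 1.6 ms
print(f"young: {tau_young * 1e3:.1f} ms, adult: {tau_adult * 1e3:.1f} ms")
```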

  3. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  4. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.
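    A crude way to picture the simultaneity-judgment measure is to estimate a point of subjective simultaneity (PSS) before and after adaptation and look at the shift. Real analyses typically fit a psychometric (e.g., Gaussian) function; the centroid used below, and all response proportions, are invented for illustration only.

    ```python
    # Toy PSS estimate from made-up simultaneity-judgment data (not the study's).
    import numpy as np

    delays = np.array([0, 66, 133, 200, 266, 333, 400], dtype=float)   # ms of DAF
    p_simul_pre  = np.array([0.95, 0.90, 0.70, 0.40, 0.20, 0.08, 0.03])
    p_simul_post = np.array([0.85, 0.92, 0.85, 0.65, 0.35, 0.15, 0.05])

    def pss(delays, p):
        """Centroid of the simultaneity-response distribution (crude PSS stand-in)."""
        return np.sum(delays * p) / np.sum(p)

    shift = pss(delays, p_simul_post) - pss(delays, p_simul_pre)
    print("PSS shift after DAF adaptation: %.1f ms" % shift)
    ```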

  5. How modality specific is processing of auditory and visual rhythms?

    Science.gov (United States)

    Pasinski, Amanda C; McAuley, J Devin; Snyder, Joel S

    2016-02-01

    The present study used ERPs to test the extent to which temporal processing is modality specific or modality general. Participants were presented with auditory and visual temporal patterns consisting of an initial two- or three-event beginning pattern, which delineated a constant standard time interval, followed by a two-event ending pattern delineating a variable test interval. Participants judged whether they perceived the pattern as a whole to be speeding up or slowing down. The contingent negative variation (CNV), a negative potential reflecting temporal expectancy, showed a larger amplitude for the auditory modality compared to the visual modality but a high degree of similarity in scalp voltage patterns across modalities, suggesting that the CNV arises from modality-general processes. A late, memory-dependent positive component (P3) also showed similar patterns across modalities.
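    To make the dependent measure concrete, the toy sketch below computes a CNV-like mean amplitude in a late pre-target window from simulated single-trial epochs; the sampling rate, window, and drift magnitudes are assumptions, not values from the study.

    ```python
    # Toy CNV mean-amplitude computation on simulated epochs (assumed values).
    import numpy as np

    sfreq = 250                                  # Hz, assumed sampling rate
    times = np.arange(-0.2, 1.2, 1 / sfreq)
    n_trials = 100
    rng = np.random.default_rng(0)

    def fake_epochs(cnv_gain):
        # negative-going drift toward the expected event, plus noise
        drift = -cnv_gain * np.clip(times, 0, None)
        return drift + rng.normal(0, 2.0, (n_trials, times.size))

    auditory = fake_epochs(cnv_gain=4.0)         # microvolts/second, illustrative
    visual   = fake_epochs(cnv_gain=2.5)

    window = (times >= 0.8) & (times <= 1.0)     # assumed late CNV window
    print("CNV mean amplitude: auditory %.2f uV, visual %.2f uV"
          % (auditory[:, window].mean(), visual[:, window].mean()))
    ```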

  6. Temporal coding by populations of auditory receptor neurons.

    Science.gov (United States)

    Sabourin, Patrick; Pollack, Gerald S

    2010-03-01

    Auditory receptor neurons of crickets are most sensitive to either low or high sound frequencies. Earlier work showed that the temporal coding properties of first-order auditory interneurons are matched to the temporal characteristics of natural low- and high-frequency stimuli (cricket songs and bat echolocation calls, respectively). We studied the temporal coding properties of receptor neurons and used modeling to investigate how activity within populations of low- and high-frequency receptors might contribute to the coding properties of interneurons. We confirm earlier findings that individual low-frequency-tuned receptors code stimulus temporal pattern poorly, but show that coding performance of a receptor population increases markedly with population size, due in part to low redundancy among the spike trains of different receptors. By contrast, individual high-frequency-tuned receptors code a stimulus temporal pattern fairly well and, because their spike trains are redundant, there is only a slight increase in coding performance with population size. The coding properties of low- and high-frequency receptor populations resemble those of interneurons in response to low- and high-frequency stimuli, suggesting that coding at the interneuron level is partly determined by the nature and organization of afferent input. Consistent with this, the sound-frequency-specific coding properties of an interneuron, previously demonstrated by analyzing its spike train, are also apparent in the subthreshold fluctuations in membrane potential that are generated by synaptic input from receptor neurons.

  7. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  8. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  9. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  10. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging.
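    The "controlling for audibility" step amounts to a partial correlation, which can be computed by regressing audibility out of both age and TGDT and correlating the residuals. The simulated numbers below only mimic the reported trend (about 1.05 ms per decade) and are not the study's data.

    ```python
    # Sketch of the partial-correlation logic on simulated data (not the study's).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1071
    age = rng.uniform(18, 98, n)
    audibility = 0.4 * age + rng.normal(0, 10, n)                 # dB-like proxy
    tgdt = 0.105 * age + 0.05 * audibility + rng.normal(0, 2, n)  # ms, ~1.05 ms/decade

    def residualize(y, x):
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)

    r_partial = np.corrcoef(residualize(age, audibility),
                            residualize(tgdt, audibility))[0, 1]
    print("partial r(age, TGDT | audibility) = %.2f" % r_partial)
    ```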

  11. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  12. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a group of dyslexics had word-level receptive difficulties, using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological age controls in the auditory lexical decision task, with disproportionately low performance on nonsense words...

  13. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  14. Peripheral auditory processing and speech reception in impaired hearing

    DEFF Research Database (Denmark)

    Strelcyk, Olaf

    One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS) ... Overall, this work provides insights into factors affecting auditory processing in listeners with impaired hearing and may have implications for future models of impaired auditory signal processing as well as advanced compensation strategies.

  15. Auditory Processing Disorder: School Psychologist Beware?

    Science.gov (United States)

    Lovett, Benjamin J.

    2011-01-01

    An increasing number of students are being diagnosed with auditory processing disorder (APD), but the school psychology literature has largely neglected this controversial condition. This article reviews research on APD, revealing substantial concerns with assessment tools and diagnostic practices, as well as insufficient research regarding many…

  16. The impact of a concurrent motor task on auditory and visual temporal discrimination tasks.

    Science.gov (United States)

    Mioni, Giovanna; Grassi, Massimo; Tarantino, Vincenza; Stablum, Franca; Grondin, Simon; Bisiacchi, Patrizia S

    2016-04-01

    Previous studies have shown the presence of an interference effect on temporal perception when participants are required to simultaneously execute a nontemporal task. Such interference likely has an attentional source. In the present work, a temporal discrimination task was performed alone or together with a self-paced finger-tapping task used as concurrent, nontemporal task. Temporal durations were presented in either the visual or the auditory modality, and two standard durations (500 and 1,500 ms) were used. For each experimental condition, the participant's threshold was estimated and analyzed. The mean Weber fraction was higher in the visual than in the auditory modality, but only for the subsecond duration, and it was higher with the 500-ms than with the 1,500-ms standard duration. Interestingly, the Weber fraction was significantly higher in the dual-task condition, but only in the visual modality. The results suggest that the processing of time in the auditory modality is likely automatic, but not in the visual modality.
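    The Weber fraction reported here is simply the estimated discrimination threshold divided by the standard duration, as in the toy calculation below (the threshold values are invented, not the study's).

    ```python
    # Toy Weber-fraction computation: threshold / standard duration.
    thresholds_ms = {
        ("auditory", 500): 40.0,  ("auditory", 1500): 90.0,   # invented values
        ("visual",   500): 90.0,  ("visual",   1500): 180.0,
    }
    for (modality, standard), thr in thresholds_ms.items():
        print(f"{modality:8s} {standard:5d} ms standard -> Weber fraction {thr / standard:.3f}")
    ```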

  17. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

    Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked-potential (AEP) provides a sensitive metric of duration processing in human auditory cortex and lateralization of MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e., rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.
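    The MMN itself is computed as a deviant-minus-standard difference wave, with amplitude typically taken as the mean in a post-deviance window. The sketch below runs that arithmetic on simulated epochs; the window, amplitudes, and ear asymmetry are illustrative assumptions only.

    ```python
    # Toy MMN computation (deviant minus standard) on simulated averaged ERPs.
    import numpy as np

    sfreq = 500
    times = np.arange(-0.1, 0.4, 1 / sfreq)
    rng = np.random.default_rng(2)

    def erp(n_trials, mmn_amp):
        # a negative bump near 170 ms post-deviance, plus trial noise, then averaged
        bump = mmn_amp * np.exp(-0.5 * ((times - 0.17) / 0.03) ** 2)
        return (bump + rng.normal(0, 3, (n_trials, times.size))).mean(axis=0)

    standard_left,  deviant_left  = erp(800, 0.0), erp(120, -2.0)   # left ear (assumed larger MMN)
    standard_right, deviant_right = erp(800, 0.0), erp(120, -0.3)   # right ear

    win = (times >= 0.12) & (times <= 0.22)        # assumed measurement window
    for ear, std, dev in [("left", standard_left, deviant_left),
                          ("right", standard_right, deviant_right)]:
        print(f"{ear:5s} ear MMN mean amplitude: {(dev - std)[win].mean():.2f} uV")
    ```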

  18. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. Their findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  19. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    Full Text Available OBJECTIVE: To analyze auditory processing test results in children who suffered from otitis media in their first five years of life, considering their age. Furthermore, to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public schools without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media in their first five years of life and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment by auditory processing tests. RESULTS: The left ear showed significantly worse performance when compared to the right ear in the dichotic digits test and pitch pattern sequence test. The students from the experimental groups showed worse performance when compared to the control group in the dichotic digits and gaps-in-noise tests. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests compared with experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years of life and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests compared with students from private schools.

  20. Pairing tone trains with vagus nerve stimulation induces temporal plasticity in auditory cortex.

    Science.gov (United States)

    Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P

    2012-01-01

    The selectivity of neurons in sensory cortex can be modified by pairing neuromodulator release with sensory stimulation. Repeated pairing of electrical stimulation of the cholinergic nucleus basalis, for example, induces input specific plasticity in primary auditory cortex (A1). Pairing nucleus basalis stimulation (NBS) with a tone increases the number of A1 neurons that respond to the paired tone frequency. Pairing NBS with fast or slow tone trains can respectively increase or decrease the ability of A1 neurons to respond to rapidly presented tones. Pairing vagus nerve stimulation (VNS) with a single tone alters spectral tuning in the same way as NBS-tone pairing without the need for brain surgery. In this study, we tested whether pairing VNS with tone trains can change the temporal response properties of A1 neurons. In naïve rats, A1 neurons respond strongly to tones repeated at rates up to 10 pulses per second (pps). Repeatedly pairing VNS with 15 pps tone trains increased the temporal following capacity of A1 neurons and repeatedly pairing VNS with 5 pps tone trains decreased the temporal following capacity of A1 neurons. Pairing VNS with tone trains did not alter the frequency selectivity or tonotopic organization of auditory cortex neurons. Since VNS is well tolerated by patients, VNS-tone train pairing represents a viable method to direct temporal plasticity in a variety of human conditions associated with temporal processing deficits.

  1. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing ... Another part of the work was aimed at experimentally characterizing the effects of cochlear damage on listeners' auditory processing, in terms of sensitivity loss and reduced temporal and spectral resolution. The results showed that listeners with comparable audiograms can have very different estimated cochlear input...

  2. Spectral features control temporal plasticity in auditory cortex.

    Science.gov (United States)

    Kilgard, M P; Pandya, P K; Vazquez, J L; Rathbun, D L; Engineer, N D; Moucha, R

    2001-01-01

    Cortical responses are adjusted and optimized throughout life to meet changing behavioral demands and to compensate for peripheral damage. The cholinergic nucleus basalis (NB) gates cortical plasticity and focuses learning on behaviorally meaningful stimuli. By systematically varying the acoustic parameters of the sound paired with NB activation, we have previously shown that tone frequency and amplitude modulation rate alter the topography and selectivity of frequency tuning in primary auditory cortex. This result suggests that network-level rules operate in the cortex to guide reorganization based on specific features of the sensory input associated with NB activity. This report summarizes recent evidence that temporal response properties of cortical neurons are influenced by the spectral characteristics of sounds associated with cholinergic modulation. For example, repeated pairing of a spectrally complex (ripple) stimulus decreased the minimum response latency for the ripple, but lengthened the minimum latency for tones. Pairing a rapid train of tones with NB activation only increased the maximum following rate of cortical neurons when the carrier frequency of each train was randomly varied. These results suggest that spectral and temporal parameters of acoustic experiences interact to shape spectrotemporal selectivity in the cortex. Additional experiments with more complex stimuli are needed to clarify how the cortex learns natural sounds such as speech.

  3. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance, acetylcholinesterase ... Architectonic criteria for the core, lateral belt, and parabelt were readily adapted from monkey to human. Additionally, we found evidence for an architectonic subdivision within the parabelt, present in both species, and we describe the location of the lateral belt and parabelt with respect to gross anatomical landmarks.

  4. Neural interactions in unilateral colliculus and between bilateral colliculi modulate auditory signal processing

    Science.gov (United States)

    Mei, Hui-Xian; Cheng, Liang; Chen, Qi-Cai

    2013-01-01

    In the auditory pathway, the inferior colliculus (IC) is a major center for temporal and spectral integration of auditory information. There are widespread neural interactions in unilateral (one) IC and between bilateral (two) ICs that could modulate auditory signal processing such as the amplitude and frequency selectivity of IC neurons. These neural interactions are either inhibitory or excitatory, and are mostly mediated by γ-aminobutyric acid (GABA) and glutamate, respectively. However, the majority of interactions are inhibitory while excitatory interactions are in the minority. Such unbalanced properties between excitatory and inhibitory projections have an important role in the formation of unilateral auditory dominance and sound location, and the neural interactions in one IC and between two ICs provide an adjustable and plastic modulation pattern for auditory signal processing. PMID:23626523

  5. Neural interactions in unilateral colliculus and between bilateral colliculi modulate auditory signal processing.

    Science.gov (United States)

    Mei, Hui-Xian; Cheng, Liang; Chen, Qi-Cai

    2013-01-01

    In the auditory pathway, the inferior colliculus (IC) is a major center for temporal and spectral integration of auditory information. There are widespread neural interactions in unilateral (one) IC and between bilateral (two) ICs that could modulate auditory signal processing such as the amplitude and frequency selectivity of IC neurons. These neural interactions are either inhibitory or excitatory, and are mostly mediated by γ-aminobutyric acid (GABA) and glutamate, respectively. However, the majority of interactions are inhibitory while excitatory interactions are in the minority. Such unbalanced properties between excitatory and inhibitory projections have an important role in the formation of unilateral auditory dominance and sound location, and the neural interactions in one IC and between two ICs provide an adjustable and plastic modulation pattern for auditory signal processing.

  6. Infant Auditory Processing and Event-related Brain Oscillations

    Science.gov (United States)

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition; allowing infants to hone in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions are fundamental to sensory development; determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI) were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3 - 8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillations analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs. In this article, we describe our method for infant EEG net
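    Single-trial band-limited power of the kind described here can be approximated by band-pass filtering each epoch in the theta range (3-8 Hz) and taking the squared Hilbert envelope. The sketch below does this on simulated noise; it does not reproduce the authors' time-frequency pipeline or their source modelling.

    ```python
    # Sketch of single-trial theta-band (3-8 Hz) power on simulated "EEG" epochs.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    sfreq = 250
    n_trials, n_samples = 60, sfreq * 2            # 2 s epochs, assumed
    rng = np.random.default_rng(3)
    epochs = rng.normal(0, 1, (n_trials, n_samples))

    b, a = butter(4, [3, 8], btype="bandpass", fs=sfreq)
    theta = filtfilt(b, a, epochs, axis=1)
    power = np.abs(hilbert(theta, axis=1)) ** 2    # single-trial theta power
    print("mean theta power for the first 5 trials:", power.mean(axis=1)[:5])
    ```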

  7. Repetition suppression in auditory-motor regions to pitch and temporal structure in music.

    Science.gov (United States)

    Brown, Rachel M; Chen, Joyce L; Hollinger, Avrum; Penhune, Virginia B; Palmer, Caroline; Zatorre, Robert J

    2013-02-01

    Music performance requires control of two sequential structures: the ordering of pitches and the temporal intervals between successive pitches. Whether pitch and temporal structures are processed as separate or integrated features remains unclear. A repetition suppression paradigm compared neural and behavioral correlates of mapping pitch sequences and temporal sequences to motor movements in music performance. Fourteen pianists listened to and performed novel melodies on an MR-compatible piano keyboard during fMRI scanning. The pitch or temporal patterns in the melodies either changed or repeated (remained the same) across consecutive trials. We expected decreased neural response to the patterns (pitch or temporal) that repeated across trials relative to patterns that changed. Pitch and temporal accuracy were high, and pitch accuracy improved when either pitch or temporal sequences repeated over trials. Repetition of either pitch or temporal sequences was associated with linear BOLD decrease in frontal-parietal brain regions including dorsal and ventral premotor cortex, pre-SMA, and superior parietal cortex. Pitch sequence repetition (in contrast to temporal sequence repetition) was associated with linear BOLD decrease in the intraparietal sulcus (IPS) while pianists listened to melodies they were about to perform. Decreased BOLD response in IPS also predicted increase in pitch accuracy only when pitch sequences repeated. Thus, behavioral performance and neural response in sensorimotor mapping networks were sensitive to both pitch and temporal structure, suggesting that pitch and temporal structure are largely integrated in auditory-motor transformations. IPS may be involved in transforming pitch sequences into spatial coordinates for accurate piano performance.

  8. Processing of location and pattern changes of natural sounds in the human auditory cortex.

    Science.gov (United States)

    Altmann, Christian F; Bledowski, Christoph; Wibral, Michael; Kaiser, Jochen

    2007-04-15

    Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

  9. Gradients and modulation of K(+) channels optimize temporal accuracy in networks of auditory neurons.

    Directory of Open Access Journals (Sweden)

    Leonard K Kaczmarek

    Full Text Available Accurate timing of action potentials is required for neurons in auditory brainstem nuclei to encode the frequency and phase of incoming sound stimuli. Many such neurons express "high threshold" Kv3-family channels that are required for firing at high rates (>~200 Hz). Kv3 channels are expressed in gradients along the medial-lateral tonotopic axis of the nuclei. Numerical simulations of auditory brainstem neurons were used to calculate the input-output relations of ensembles of 1-50 neurons, stimulated at rates between 100-1500 Hz. Individual neurons with different levels of potassium currents differ in their ability to follow specific rates of stimulation but all perform poorly when the stimulus rate is greater than the maximal firing rate of the neurons. The temporal accuracy of the combined synaptic output of an ensemble is, however, enhanced by the presence of gradients in Kv3 channel levels over that measured when neurons express uniform levels of channels. Surprisingly, at high rates of stimulation, temporal accuracy is also enhanced by the occurrence of random spontaneous activity, such as is normally observed in the absence of sound stimulation. For any pattern of stimulation, however, greatest accuracy is observed when, in the presence of spontaneous activity, the levels of potassium conductance in all of the neurons are adjusted to those found in the subset of neurons that respond better than their neighbors. This optimization of response by adjusting the K(+) conductance occurs for stimulus patterns containing either single or multiple frequencies in the phase-locking range. The findings suggest that gradients of channel expression are required for normal auditory processing and that changes in levels of potassium currents across the nuclei, by mechanisms such as protein phosphorylation and rapid changes in channel synthesis, adapt the nuclei to the ongoing auditory environment.

  10. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale, and the intraparietal sulcus, demonstrating that this area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682

  11. Modulation of auditory evoked responses to spectral and temporal changes by behavioral discrimination training

    Directory of Open Access Journals (Sweden)

    Okamoto Hidehiko

    2009-12-01

    Full Text Available Background: Due to auditory experience, musicians have better auditory expertise than non-musicians. An increased neocortical activity during auditory oddball stimulation was observed in different studies for musicians and for non-musicians after discrimination training. This suggests a modification of synaptic strength among simultaneously active neurons due to the training. We used amplitude-modulated tones (AM) presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians in order to see if behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses like N1 and mismatch negativity (MMN) triggered by sound changes were recorded by a whole head magnetoencephalographic system (MEG). We investigated (i) how the auditory cortex reacts to pitch differences (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex. Results: The results showed that, in addition to an improvement of the behavioral discrimination performance, discrimination training of carrier frequency changes significantly modulates the MMN and N1 response amplitudes after the training. This process was accompanied by an attention switch to the deviant stimulus after the training procedure, identified by the occurrence of a P3a component. In contrast, the training in discrimination of modulation frequency was not sufficient to improve the behavioral discrimination performance or to alter the cortical response (MMN) to the modulation frequency change. The N1 amplitude, however, showed a significant increase after and one week after the training. Similar to the training in carrier frequency discrimination, a long lasting…

  12. Near-Term Fetuses Process Temporal Features of Speech

    Science.gov (United States)

    Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie

    2011-01-01

    The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…

  13. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    Science.gov (United States)

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.
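    As a quick sanity check on the "variance accounted for" figures, note that the proportion of variance a single predictor explains is the squared correlation, so 16% and 25% correspond to correlations of roughly 0.40 and 0.50 (the values below are that back-calculation, not reported statistics).

    ```python
    # Back-of-envelope check only: variance explained by one predictor = r**2.
    r_behavioral = 0.40    # would account for ~16% of the variance
    r_eeg_marker = 0.50    # would account for ~25% of the variance
    print("behavioral synchrony judgment: r^2 = %.2f" % r_behavioral ** 2)
    print("EEG synchrony marker:          r^2 = %.2f" % r_eeg_marker ** 2)
    ```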

  14. Temporal coordination in joint music performance: effects of endogenous rhythms and auditory feedback.

    Science.gov (United States)

    Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline

    2015-02-01

    Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.
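    The basic coordination measure, tone onset asynchrony, is just the signed timing difference between partners' matched note onsets; the mean absolute asynchrony summarizes how tightly the duo is locked. The onset times below are invented for illustration, not performance data.

    ```python
    # Toy asynchrony computation between two performers' matched note onsets.
    import numpy as np

    onsets_pianist1 = np.array([0.000, 0.512, 1.021, 1.534, 2.050])   # seconds, invented
    onsets_pianist2 = np.array([0.012, 0.505, 1.040, 1.528, 2.066])

    asynchrony = onsets_pianist2 - onsets_pianist1
    print("mean absolute asynchrony: %.1f ms" % (1e3 * np.abs(asynchrony).mean()))
    ```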

  15. Electrical brain imaging evidences left auditory cortex involvement in speech and non-speech discrimination based on temporal features

    Directory of Open Access Journals (Sweden)

    Jancke Lutz

    2007-12-01

    Full Text Available Background: Speech perception is based on a variety of spectral and temporal acoustic features available in the acoustic signal. Voice-onset time (VOT) is considered an important cue that is cardinal for phonetic perception. Methods: In the present study, we recorded and compared scalp auditory evoked potentials (AEP) in response to consonant-vowel syllables (CV) with varying voice-onset times (VOT) and non-speech analogues with varying noise-onset times (NOT). In particular, we aimed to investigate the spatio-temporal pattern of acoustic feature processing underlying elemental speech perception and relate this temporal processing mechanism to specific activations of the auditory cortex. Results: Results show that the characteristic AEP waveform in response to consonant-vowel syllables is on a par with those of non-speech sounds with analogous temporal characteristics. The amplitude of the N1a and N1b components of the auditory evoked potentials significantly correlated with the duration of the VOT in CV syllables and, likewise, with the duration of the NOT in non-speech sounds. Furthermore, current density maps indicate overlapping supratemporal networks involved in the perception of both speech and non-speech sounds, with a bilateral activation pattern during the N1a time window and leftward asymmetry during the N1b time window. Elaborate regional statistical analysis of the activation over the middle and posterior portion of the supratemporal plane (STP) revealed strong left-lateralized responses over the middle STP for both the N1a and N1b components, and a functional leftward asymmetry over the posterior STP for the N1b component. Conclusion: The present data demonstrate overlapping spatio-temporal brain responses during the perception of temporal acoustic cues in both speech and non-speech sounds. Source estimation evidences a preponderant role of the left middle and posterior auditory cortex in speech and non-speech discrimination based on temporal…

  16. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds since it can enhance ... The purpose of the present thesis is to develop a computational auditory processing model that accounts for a large variety of experimental data on CMR, in order to obtain a more thorough understanding of the basic processing principles underlying the processing of across-frequency modulations. The second...

  17. Resolução temporal auditiva em idosos / Auditory temporal resolution in elderly people

    Directory of Open Access Journals (Sweden)

    Flávia Duarte Liporaci

    2010-12-01

    Full Text Available PURPOSE: To assess auditory processing in elderly people using the temporal resolution Gaps-in-Noise test, and to verify whether the presence of hearing loss influences performance on this test. METHODS: Sixty-five elderly listeners, aged between 60 and 79 years, were assessed with the Gaps-in-Noise test. For sample selection, the following procedures were carried out: anamnesis, mini-mental state examination, and basic audiological evaluation. The participants were first allocated and studied as a single group, and were then divided into three groups according to audiometric results at the frequencies of 500 Hz and 1, 2, 3, 4, and 6 kHz: G1 with normal hearing, G2 with mild hearing loss, and G3 with moderate hearing loss. RESULTS: Across the whole sample, the mean gap detection threshold and percentage of correct responses were 8.1 ms and 52.6% for the right ear and 8.2 ms and 52.2% for the left ear. In G1, these measures were 7.3 ms and 57.6% for the right ear and 7.7 ms and 55.8% for the left ear. In G2, they were 8.2 ms and 52.5% for the right ear and 7.9 ms and 53.2% for the left ear. In G3, they were 9.2 ms and 45.2% for both ears. CONCLUSION: The presence of hearing loss raised gap detection thresholds and reduced the percentage of correct responses on the Gaps-in-Noise test.

  18. Auditory Processing Theories of Language Disorders: Past, Present, and Future

    Science.gov (United States)

    Miller, Carol A.

    2011-01-01

    Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…

  19. Strategy choice mediates the link between auditory processing and spelling.

    Science.gov (United States)

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  20. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
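    The predictor analysis can be sketched as an ordinary multiple regression of an auditory test score on years of schooling and, for example, age; the simulated coefficients below are placeholders and do not reproduce the study's estimates.

    ```python
    # Sketch of a multiple regression of test score on schooling and age
    # (simulated data, not the study's).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    n = 177
    schooling = rng.integers(0, 20, n).astype(float)            # years of schooling
    age = rng.uniform(50, 85, n)
    score = 2.0 * schooling - 0.3 * age + rng.normal(0, 8, n)   # invented relationship

    X = np.column_stack([schooling, age])
    model = LinearRegression().fit(X, score)
    print("coefficients (schooling, age):", np.round(model.coef_, 2))
    print("R^2:", round(model.score(X, score), 2))
    ```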

  1. Temporal expectation and attention jointly modulate auditory oscillatory activity in the beta band.

    Science.gov (United States)

    Todorovic, Ana; Schoffelen, Jan-Mathijs; van Ede, Freek; Maris, Eric; de Lange, Floris P

    2015-01-01

    The neural response to a stimulus is influenced by endogenous factors such as expectation and attention. Current research suggests that expectation and attention exert their effects in opposite directions, where expectation decreases neural activity in sensory areas, while attention increases it. However, expectation and attention are usually studied either in isolation or confounded with each other. A recent study suggests that expectation and attention may act jointly on sensory processing, by increasing the neural response to expected events when they are attended, but decreasing it when they are unattended. Here we test this hypothesis in an auditory temporal cueing paradigm using magnetoencephalography in humans. In our study participants attended to, or away from, tones that could arrive at expected or unexpected moments. We found a decrease in auditory beta band synchrony to expected (versus unexpected) tones if they were unattended, but no difference if they were attended. Modulations in beta power were already evident prior to the expected onset times of the tones. These findings suggest that expectation and attention jointly modulate sensory processing.

  2. Effects of deafness and cochlear implant use on temporal response characteristics in cat primary auditory cortex.

    Science.gov (United States)

    Fallon, James B; Shepherd, Robert K; Nayagam, David A X; Wise, Andrew K; Heffer, Leon F; Landry, Thomas G; Irvine, Dexter R F

    2014-09-01

    We have previously shown that neonatal deafness of 7-13 months duration leads to loss of cochleotopy in the primary auditory cortex (AI) that can be reversed by cochlear implant use. Here we describe the effects of a similar duration of deafness and cochlear implant use on temporal processing. Specifically, we compared the temporal resolution of neurons in AI of young adult normal-hearing cats that were acutely deafened and implanted immediately prior to recording with that in three groups of neonatally deafened cats. One group of neonatally deafened cats received no chronic stimulation. The other two groups received up to 8 months of either low- or high-rate (50 or 500 pulses per second per electrode, respectively) stimulation from a clinical cochlear implant, initiated at 10 weeks of age. Deafness of 7-13 months duration had no effect on the duration of post-onset response suppression, latency, latency jitter, or the stimulus repetition rate at which units responded maximally (best repetition rate), but resulted in a statistically significant reduction in the ability of units to respond to every stimulus in a train (maximum following rate). None of the temporal response characteristics of the low-rate group differed from those in acutely deafened controls. In contrast, high-rate stimulation had diverse effects: it resulted in decreased suppression duration, longer latency and greater jitter relative to all other groups, and an increase in best repetition rate and cut-off rate relative to acutely deafened controls. The minimal effects of moderate-duration deafness on temporal processing in the present study are in contrast to its previously-reported pronounced effects on cochleotopy. Much longer periods of deafness have been reported to result in significant changes in temporal processing, in accord with the fact that duration of deafness is a major factor influencing outcome in human cochlear implantees.

  3. Auditory Processing Learning Disability, Suicidal Ideation, and Transformational Faith

    Science.gov (United States)

    Bailey, Frank S.; Yocum, Russell G.

    2015-01-01

    The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…

  4. Processing of communication calls in Guinea pig auditory cortex.

    Directory of Open Access Journals (Sweden)

    Jasmine M S Grimsley

    Full Text Available Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. In order to test this hypothesis we presented exemplars from all ten of their main adult vocalizations to urethane anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI, and the small area (area S) that is rostral to AI. Area VRB has a denser representation of cells that are better at discriminating among calls by using either a rate code or a temporal code than any other area. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call responsive cells that showed excellent temporal locking, 31% of which selectively responded to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.

  5. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Full Text Available Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses, we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  6. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  7. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.

  8. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) during three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest that vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  9. Morphometrical Study of the Temporal Bone and Auditory Ossicles in Guinea Pig

    Directory of Open Access Journals (Sweden)

    Ahmadali Mohammadpour

    2011-03-01

    Full Text Available In this research, anatomical descriptions of the structure of the temporal bone and auditory ossicles were made based on dissection of ten guinea pigs. The results showed that the guinea pig temporal bone was similar to that of other animals and had three parts: squamous, tympanic and petrous. The tympanic part was much better developed and consisted of an oval-shaped tympanic bulla with many recesses in the tympanic cavity. The auditory ossicles of the guinea pig consisted of three small bones (malleus, incus and stapes), but the head of the malleus and the body of the incus were fused, forming a malleoincudal complex. The morphometric measurements showed that the malleus was 3.53 ± 0.22 mm in total length. In addition to a head and handle, the malleus had two distinct processes: lateral and muscular. The incus had a total length of 1.23 ± 0.02 mm. It had a long and a short crus, although the long crus was better developed than the short one. The lenticular bone was a round bone that articulated with the long crus of the incus. The stapes had a total length of 1.38 ± 0.04 mm. The anterior crus (0.86 ± 0.08 mm) was larger than the posterior crus (0.76 ± 0.08 mm). It is concluded that, in the guinea pig, the malleus and the incus are fused, forming a malleoincudal junction, whereas in other animals these are separate bones. The stapes is larger, has a triangular shape, and its anterior and posterior crura are thicker than in other rodents. Therefore, the guinea pig is a good laboratory animal for otological studies.

  10. Auditory Temporal Order Discrimination and Backward Recognition Masking in Adults with Dyslexia

    Science.gov (United States)

    Griffiths, Yvonne M.; Hill, Nicholas I.; Bailey, Peter J.; Snowling, Margaret J.

    2003-01-01

    The ability of 20 adult dyslexic readers to extract frequency information from successive tone pairs was compared with that of IQ-matched controls using temporal order discrimination and auditory backward recognition masking (ABRM) tasks. In both paradigms, the interstimulus interval (ISI) between tones in a pair was either short (20 ms) or long…

  11. Spectro-Temporal Methods in Primary Auditory Cortex

    Science.gov (United States)

    2006-01-01

    Spike-triggered averaging of the spectro-temporal envelope directly gives a spectro-temporal response field similar to the spike-triggered…

  12. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

    Full Text Available Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.

  13. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    Full Text Available We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. Participants read some sentences with specific delay times of DAF (0, 30, 75, 120 ms) during three minutes to induce ‘Lag Adaptation’. After the adaptation, they judged the simultaneity between motor sensation and the vocal sound given as feedback while producing a simple voice sound, but not speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest that vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  14. Auditory N1 reveals planning and monitoring processes during music performance.

    Science.gov (United States)

    Mathias, Brian; Gehring, William J; Palmer, Caroline

    2017-02-01

    The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks.

  15. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from the rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal modulation transfer functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both the spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by the perceptually tolerable sounds resynthesized from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional…
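
    As a rough illustration of the kind of spectro-temporal modulation analysis described above (not the two-stage cortical model itself), the Python sketch below computes a 2-D modulation spectrum from a log-frequency spectrogram; its axes come out in Hz (temporal rate) and cycles/octave (spectral scale), the units in which the 16 Hz and 2 cycles/octave MTF bandwidths are quoted. The function name and toy input are illustrative assumptions, not part of the original model.

        import numpy as np

        def modulation_spectrum(spectrogram, frame_rate_hz, channels_per_octave):
            """2-D modulation spectrum of a log-frequency spectrogram.

            Rows are log-spaced frequency channels, columns are time frames.
            Returns (power, temporal_rates_hz, spectral_scales_cyc_per_oct)."""
            spec = spectrogram - spectrogram.mean()
            power = np.abs(np.fft.fftshift(np.fft.fft2(spec))) ** 2
            rates = np.fft.fftshift(np.fft.fftfreq(spec.shape[1], d=1.0 / frame_rate_hz))
            scales = np.fft.fftshift(np.fft.fftfreq(spec.shape[0], d=1.0 / channels_per_octave))
            return power, rates, scales

        # Toy spectrogram: 64 channels (8 per octave), 200 frames at 100 frames/s
        S = np.random.default_rng(0).standard_normal((64, 200))
        P, rates, scales = modulation_spectrum(S, frame_rate_hz=100, channels_per_octave=8)
        # A lowpass MTF (~16 Hz, ~2 cycles/octave) would keep only the corresponding region of P.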

  16. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    Full Text Available In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk aged 31 to 38 postmenstrual weeks (PMW) during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW the topography of responses in quiet sleep was different for "click" and voice stimuli: responses to "clicks" became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW auditory evoked delta-brushes progressively disappeared and were replaced by a low amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory driven delta-brushes represent a ubiquitous feature of the human sensory cortex.

  17. Polymodal information processing via temporal cortex Area 37 modeling

    Science.gov (United States)

    Peterson, James K.

    2004-04-01

    A model of biological information processing is presented that consists of auditory and visual subsystems linked to temporal cortex and limbic processing. A biologically based algorithm is presented for fusing information sources of fundamentally different modalities. Proof of this concept is outlined by a system which combines auditory input (musical sequences) and visual input (illustrations such as paintings) via a model of cortex processing for Area 37 of the temporal cortex. The training data can be used to construct both a connectionist model, whose biological relevance is suspect yet which is still useful, and a biologically based model that achieves the same input-to-output map through biologically relevant means. The constructed models are able to create, from a set of auditory and visual cues, a combined musical/illustration output which shares many of the properties of the original training data. These algorithms are not dependent on these particular auditory/visual modalities and hence are of general use in the intelligent computation of outputs that require sensor fusion.

  18. Altered temporal dynamics of neural adaptation in the aging human auditory cortex.

    Science.gov (United States)

    Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas

    2016-09-01

    Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging.

  19. Sensitivity of cochlear nucleus neurons to spatio-temporal changes in auditory nerve activity.

    Science.gov (United States)

    Wang, Grace I; Delgutte, Bertrand

    2012-12-01

    The spatio-temporal pattern of auditory nerve (AN) activity, representing the relative timing of spikes across the tonotopic axis, contains cues to perceptual features of sounds such as pitch, loudness, timbre, and spatial location. These spatio-temporal cues may be extracted by neurons in the cochlear nucleus (CN) that are sensitive to relative timing of inputs from AN fibers innervating different cochlear regions. One possible mechanism for this extraction is "cross-frequency" coincidence detection (CD), in which a central neuron converts the degree of coincidence across the tonotopic axis into a rate code by preferentially firing when its AN inputs discharge in synchrony. We used Huffman stimuli (Carney LH. J Neurophysiol 64: 437-456, 1990), which have a flat power spectrum but differ in their phase spectra, to systematically manipulate relative timing of spikes across tonotopically neighboring AN fibers without changing overall firing rates. We compared responses of CN units to Huffman stimuli with responses of model CD cells operating on spatio-temporal patterns of AN activity derived from measured responses of AN fibers with the principle of cochlear scaling invariance. We used the maximum likelihood method to determine the CD model cell parameters most likely to produce the measured CN unit responses, and thereby could distinguish units behaving like cross-frequency CD cells from those consistent with same-frequency CD (in which all inputs would originate from the same tonotopic location). We find that certain CN unit types, especially those associated with globular bushy cells, have responses consistent with cross-frequency CD cells. A possible functional role of a cross-frequency CD mechanism in these CN units is to increase the dynamic range of binaural neurons that process cues for sound localization.
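
    The "cross-frequency" coincidence-detection idea lends itself to a compact sketch: an output unit converts the degree of spike coincidence across its input fibers into a rate code by firing only when enough inputs discharge within a short window. The code below is a deliberately simplified, hypothetical stand-in for the model CD cells described above; the window, threshold, and toy spike trains are arbitrary choices rather than values from the study.

        import numpy as np

        def coincidence_detector(input_spike_trains, window=0.001, threshold=3):
            """Emit an output spike whenever >= `threshold` input spikes fall within `window` s.

            input_spike_trains: list of spike-time arrays, one per AN fiber; for a
            cross-frequency CD cell these fibers would have different characteristic frequencies."""
            all_spikes = np.sort(np.concatenate(input_spike_trains))
            out = [t for t in all_spikes
                   if np.sum((all_spikes >= t) & (all_spikes < t + window)) >= threshold]
            return np.array(out)

        # Toy input: 5 fibers whose spikes are nearly aligned at 10 ms, scattered elsewhere
        rng = np.random.default_rng(0)
        fibers = [np.sort(np.r_[0.010 + rng.normal(0.0, 1e-4), rng.uniform(0.0, 0.05, 3)])
                  for _ in range(5)]
        print(coincidence_detector(fibers))  # output spikes cluster around the coincident input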

  20. An auditory illusion of infinite tempo change based on multiple temporal levels.

    Directory of Open Access Journals (Sweden)

    Guy Madison

    Full Text Available Humans and a few select insect and reptile species synchronise inter-individual behaviour without any time lag by predicting the time of future events rather than reacting to them. This is evident in music performance, dance, and drill. Although repetition of equal time intervals (i.e. isochrony) is the central principle for such prediction, this simple information is used in a flexible and complex way that accommodates multiples, subdivisions, and gradual changes of intervals. The scope of this flexibility remains largely uncharted, and the underlying mechanisms are a matter for speculation. Here I report an auditory illusion that highlights some aspects of this behaviour and that provides a powerful tool for its future study. A sound pattern is described that affords multiple alternative and concurrent rates of recurrence (temporal levels). An algorithm that systematically controls time intervals and the relative loudness among these levels creates an illusion that the perceived rate speeds up or slows down infinitely. Human participants synchronised hand movements with their perceived rate of events, and exhibited a change in their movement rate that was several times larger than the physical change in the sound pattern. The illusion demonstrates the duality between the external signal and the internal predictive process, such that people's tendency to follow their own subjective pulse overrides the overall properties of the stimulus pattern. Furthermore, accurate synchronisation with sounds separated by more than 8 s demonstrates that multiple temporal levels are employed for facilitating temporal organisation and integration by the human brain. A number of applications of the illusion and the stimulus pattern are suggested.
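
    The abstract does not give the interval and loudness algorithm itself, but the general construction (several nested temporal levels whose rates and loudness weights drift so that the end state maps onto the start state, allowing the pattern to loop) can be sketched as follows. This is a hedged, Risset-rhythm-like illustration with arbitrary parameters, not the stimulus used in the study.

        import numpy as np

        def endless_accel_levels(duration=20.0, base_ioi=2.0, n_levels=4, sr=44100):
            """Click pattern with nested temporal levels (IOI, IOI/2, IOI/4, ...).

            Over `duration` every inter-onset interval shrinks by a factor of two while the
            loudness weighting drifts one level down, so the end state equals the start
            state; looping the output gives an impression of endless speeding up."""
            signal = np.zeros(int(duration * sr))
            t = np.arange(len(signal)) / sr
            for level in range(n_levels):
                # instantaneous event rate of this level, halving its IOI across the loop
                rate = 2.0 ** (t / duration) * 2.0 ** level / base_ioi
                phase = np.cumsum(rate) / sr
                clicks = np.diff(np.floor(phase), prepend=0.0) > 0
                # loudness weight: Gaussian over (level + elapsed fraction), centred mid-range
                weight = np.exp(-0.5 * (level + t / duration - n_levels / 2.0) ** 2)
                signal[clicks] += weight[clicks]
            return signal / np.abs(signal).max()

        y = endless_accel_levels()  # play y on a loop; each pass appears faster than the last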

  1. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder.

    Science.gov (United States)

    Schochat, E; Musiek, F E; Alonso, R; Ogata, J

    2010-08-01

    The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 microV (mean), 0.39 (SD--standard deviation) for the (C)APD group and 1.18 microV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 microV (mean), 0.31 (SD) for the (C)APD group and 1.00 microV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 microV (mean), 0.82 (SD)] and C3-A2 [1.24 microV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  2. Effect of auditory training on the middle latency response in children with (central) auditory processing disorder

    Directory of Open Access Journals (Sweden)

    E. Schochat

    2010-08-01

    Full Text Available The purpose of this study was to determine the middle latency response (MLR) characteristics (latency and amplitude) in children with (central) auditory processing disorder [(C)APD], categorized as such by their performance on the central auditory test battery, and the effects of these characteristics after auditory training. Thirty children with (C)APD, 8 to 14 years of age, were tested using the MLR-evoked potential. This group was then enrolled in an 8-week auditory training program and then retested at the completion of the program. A control group of 22 children without (C)APD, composed of relatives and acquaintances of those involved in the research, underwent the same testing at equal time intervals, but were not enrolled in the auditory training program. Before auditory training, MLR results for the (C)APD group exhibited lower C3-A1 and C3-A2 wave amplitudes in comparison to the control group [C3-A1, 0.84 µV (mean), 0.39 (SD, standard deviation) for the (C)APD group and 1.18 µV (mean), 0.65 (SD) for the control group; C3-A2, 0.69 µV (mean), 0.31 (SD) for the (C)APD group and 1.00 µV (mean), 0.46 (SD) for the control group]. After training, the MLR C3-A1 [1.59 µV (mean), 0.82 (SD)] and C3-A2 [1.24 µV (mean), 0.73 (SD)] wave amplitudes of the (C)APD group significantly increased, so that there was no longer a significant difference in MLR amplitude between the (C)APD and control groups. These findings suggest progress in the use of electrophysiological measurements for the diagnosis and treatment of (C)APD.

  3. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Background: In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results: We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion: The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  4. The Effect of Temporal Context on the Sustained Pitch Response in Human Auditory Cortex

    OpenAIRE

    Gutschalk, Alexander; Patterson, Roy D.; Scherg, Michael; Uppenkamp, Stefan; Rupp, André

    2006-01-01

    Recent neuroimaging studies have shown that activity in lateral Heschl’s gyrus covaries specifically with the strength of musical pitch. Pitch strength is important for the perceptual distinctiveness of an acoustic event, but in complex auditory scenes, the distinctiveness of an event also depends on its context. In this magnetoencephalography study, we evaluate how temporal context influences the sustained pitch response (SPR) in lateral Heschl’s gyrus. In 2 sequences of continuously alterna...

  5. Differences in auditory processing of words and pseudowords: an fMRI study.

    Science.gov (United States)

    Newman, S D; Twieg, D

    2001-09-01

    Although there has been great interest in the neuroanatomical basis of reading, little attention has been focused on auditory language processing. The purpose of this study was to examine the differential neuroanatomical response to the auditory processing of real words and pseudowords. Eight healthy right-handed participants performed two phoneme monitoring tasks (one with real word stimuli and one with pseudowords) during a functional magnetic resonance imaging (fMRI) scan with a 4.1 T system. Both tasks activated the inferior frontal gyrus (IFG), the posterior superior temporal gyrus (pSTG) and the inferior parietal lobe (IPL). Pseudoword processing elicited significantly more activation within the posterior cortical regions compared with real word processing. Previous reading studies have suggested that this increase is due to an increased demand on the lexical access system. The left inferior frontal gyrus, on the other hand, did not reveal a significant difference in the amount of activation as a function of stimulus type. The lack of a differential response in IFG for auditory processing supports its hypothesized involvement in grapheme to phoneme conversion processes. These results are consistent with those from previous neuroimaging reading studies and emphasize the utility of examining both input modalities (e.g., visual or auditory) to compose a more complete picture of the language network.

  6. Temporal Processing Dysfunction in Schizophrenia

    Science.gov (United States)

    Carroll, Christine A.; Boggs, Jennifer; O'Donnell, Brian F.; Shekhar, Anantha; Hetrick, William P.

    2008-01-01

    Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the…

  7. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene.
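
    In the EEG literature, the "linear regression methods" referred to above are typically lagged (ridge) regressions from a stimulus feature onto the recorded signal. The sketch below shows that generic approach under the assumption that the feature is the figure's coherence level over time; it is not the authors' analysis code, and the toy example merely checks that a known kernel can be recovered.

        import numpy as np

        def estimate_trf(stimulus, eeg, max_lag, ridge=1.0):
            """Lagged ridge regression from a stimulus feature onto EEG.

            stimulus : 1-D array, e.g. the figure's coherence level over time
            eeg      : 1-D array, same length and sampling rate as `stimulus`
            max_lag  : length of the response window in samples (e.g. 0-300 ms)"""
            lags = np.arange(max_lag)
            X = np.column_stack([np.roll(stimulus, lag) for lag in lags])[max_lag:]
            y = np.asarray(eeg)[max_lag:]
            # closed-form ridge solution: w = (X'X + lambda*I)^-1 X'y
            w = np.linalg.solve(X.T @ X + ridge * np.eye(max_lag), X.T @ y)
            return w  # one weight per lag: the estimated temporal response function

        # Toy check: a known 30-sample kernel is recovered from noisy simulated "EEG"
        rng = np.random.default_rng(1)
        stim = rng.standard_normal(5000)
        kernel = np.exp(-np.arange(30) / 10.0)
        eeg = np.convolve(stim, kernel)[:5000] + 0.1 * rng.standard_normal(5000)
        print(np.allclose(estimate_trf(stim, eeg, 30, ridge=0.1), kernel, atol=0.1))  # True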

  8. Temporal processes involved in simultaneous reflection masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg

    2006-01-01

    reflection delays and enhances the test reflection for large delays. Employing a 200-ms-long broadband noise burst as input signal, the critical delay separating these two binaural phenomena was found to be 7–10 ms. It was suggested that the critical delay refers to a temporal window that is employed......, resulting in a critical delay of about 2–3 ms for 20-ms-long stimuli. Hence, for very short stimuli the temporal window or critical delay exhibits values similar to the auditory temporal resolution as, for instance, observed in gap-detection tasks. It is suggested that the larger critical delay observed...

  9. Are Auditory and Visual Processing Deficits Related to Developmental Dyslexia?

    Science.gov (United States)

    Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno

    2012-01-01

    The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…

  10. Modulating human auditory processing by transcranial electrical stimulation

    Directory of Open Access Journals (Sweden)

    Kai eHeimrath

    2016-03-01

    Full Text Available Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS) has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.

  11. Matching Pursuit Analysis of Auditory Receptive Fields' Spectro-Temporal Properties

    Science.gov (United States)

    Bach, Jörg-Hendrik; Kollmeier, Birger; Anemüller, Jörn

    2017-01-01

    Gabor filters have long been proposed as models for spectro-temporal receptive fields (STRFs), with their specific spectral and temporal rate of modulation qualitatively replicating characteristics of STRF filters estimated from responses to auditory stimuli in physiological data. The present study builds on the Gabor-STRF model by proposing a methodology to quantitatively decompose STRFs into a set of optimally matched Gabor filters through matching pursuit, and by quantitatively evaluating spectral and temporal characteristics of STRFs in terms of the derived optimal Gabor-parameters. To summarize a neuron's spectro-temporal characteristics, we introduce a measure for the “diagonality,” i.e., the extent to which an STRF exhibits spectro-temporal transients which cannot be factorized into a product of a spectral and a temporal modulation. With this methodology, it is shown that approximately half of 52 analyzed zebra finch STRFs can each be well approximated by a single Gabor or a linear combination of two Gabor filters. Moreover, the dominant Gabor functions tend to be oriented either in the spectral or in the temporal direction, with truly “diagonal” Gabor functions rarely being necessary for reconstruction of an STRF's main characteristics. As a toy example for the applicability of STRF and Gabor-STRF filters to auditory detection tasks, we use STRF filters as features in an automatic event detection task and compare them to idealized Gabor filters and mel-frequency cepstral coefficients (MFCCs). STRFs classify a set of six everyday sounds with an accuracy similar to reference Gabor features (94% recognition rate). Spectro-temporal STRF and Gabor features outperform reference spectral MFCCs in quiet and in low noise conditions (down to 0 dB signal to noise ratio). PMID:28232791
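
    A minimal sketch of a spectro-temporal Gabor filter of the kind discussed above is given below, assuming a generic parameterization (temporal rate in Hz, spectral scale in cycles/octave, and a Gaussian envelope); the separable/oriented switch is a crude stand-in for the paper's notion of "diagonality". Parameter names and values are illustrative, not taken from the study.

        import numpy as np

        def gabor_strf(times, freqs_oct, rate_hz, scale_cyc_oct, phase=0.0,
                       sigma_t=0.05, sigma_f=0.5, separable=True):
            """Spectro-temporal Gabor filter on a (frequency-in-octaves x time) grid.

            separable=True  -> product of a temporal and a spectral Gabor (no diagonality)
            separable=False -> one oriented carrier, i.e. a truly "diagonal" STRF"""
            T, F = np.meshgrid(times, freqs_oct)            # shapes: (n_freq, n_time)
            envelope = np.exp(-0.5 * ((T / sigma_t) ** 2 + (F / sigma_f) ** 2))
            if separable:
                carrier = (np.cos(2 * np.pi * rate_hz * T + phase)
                           * np.cos(2 * np.pi * scale_cyc_oct * F))
            else:
                carrier = np.cos(2 * np.pi * (rate_hz * T + scale_cyc_oct * F) + phase)
            return envelope * carrier

        t = np.linspace(-0.15, 0.15, 61)    # seconds relative to spike time
        f = np.linspace(-1.5, 1.5, 33)      # octaves relative to best frequency
        strf = gabor_strf(t, f, rate_hz=8.0, scale_cyc_oct=0.8, separable=False)
        print(strf.shape)                   # (33, 61): frequency x time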

  12. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine...... the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using...... in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis...

  13. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when using hearing protectors such as earplugs and headsets in high-noise environments. Biomedical finite element (FE) models are developed based on volume computed tomography scan data, which provide explicit external ear, ear canal, middle-ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as a prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  14. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
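
    As a toy illustration of predictive uncertainty quantified as Shannon entropy, the sketch below estimates a first-order Markov model over notes and reports the entropy of its next-note distribution. The study itself used an unsupervised, variable-order Markov model; the bigram model, corpus, and function names here are simplifying assumptions.

        import numpy as np
        from collections import Counter, defaultdict

        def bigram_model(melodies):
            """First-order Markov model: P(next note | current note), estimated by counting."""
            counts = defaultdict(Counter)
            for melody in melodies:
                for a, b in zip(melody, melody[1:]):
                    counts[a][b] += 1
            return {ctx: {note: n / sum(c.values()) for note, n in c.items()}
                    for ctx, c in counts.items()}

        def predictive_entropy(dist):
            """Shannon entropy (bits) of a distribution over the next note: high = uncertain."""
            p = np.array(list(dist.values()))
            return float(-np.sum(p * np.log2(p)))

        corpus = [[60, 62, 64, 62, 60], [60, 64, 62, 60, 59, 60]]   # toy melodies (MIDI notes)
        model = bigram_model(corpus)
        print(predictive_entropy(model[60]))   # listener's uncertainty after hearing note 60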

  15. Bilateral collicular interaction: modulation of auditory signal processing in frequency domain.

    Science.gov (United States)

    Cheng, L; Mei, H-X; Tang, J; Fu, Z-Y; Jen, P H-S; Chen, Q-C

    2013-04-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from a variety of lower auditory nuclei, intrinsic projections within the IC, contralateral IC through the commissure of the IC and the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in the modulation of frequency-domain signal processing of mice using electrophysiological recording and focal electrical stimulation. Focal electrical stimulation of neurons in one IC produces widespread inhibition and focused facilitation of responses of neurons in the other IC. This bilateral collicular interaction decreases the response magnitude and lengthens the response latency of inhibited IC neurons but produces an opposite effect on the response of facilitated IC neurons. In the frequency domain, the focal electrical stimulation of one IC sharpens or expands the frequency tuning curves (FTCs) of neurons in the other IC to improve frequency sensitivity and the frequency response range. The focal electrical stimulation also produces a shift in the best frequency (BF) of modulated IC (ICMdu) neurons toward that of electrically stimulated IC (ICES) neurons. The degree of bilateral collicular interaction is dependent upon the difference in the BF between the ICES neurons and ICMdu neurons. These data suggest that bilateral collicular interaction is a part of dynamic acoustic signal processing that adjusts and improves signal processing as well as reorganizes collicular representation of signal parameters according to the acoustic experience.

  16. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans showed equivocal results regarding the lateralization of processing. Whereas experiments with a moderate background found evidence for right lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left lateralized processing. With the present fMRI study, we compared the task dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with a loud background scanner noise and a fast low-angle shot (FLASH) sequence with a soft background scanner noise. To determine the lateralization of the processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurement, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones. This was shown by a strong effect of the additional contralateral noise on the activity in the left auditory cortex. This means a massive reduction in background scanner noise still leads to a significant left lateralized effect. This suggests that the reversed lateralization in fMRI studies with loud background noise in contrast to studies with softer background cannot be fully explained by the MRI background noise.

  17. Functional hemispheric specialization in processing phonemic and prosodic auditory changes in neonates

    Directory of Open Access Journals (Sweden)

    Takeshi eArimitsu

    2011-09-01

    Full Text Available This study focuses on the early cerebral base of speech perception by examining functional lateralization in neonates for processing segmental and suprasegmental features of speech. For this purpose, auditory evoked responses of full-term neonates to phonemic and prosodic contrasts were measured in their temporal area and part of the frontal and parietal areas using near-infrared spectroscopy (NIRS). Stimuli used here were the phonemic contrast /itta/ versus /itte/ and the prosodic contrast between the declarative and interrogative forms /itta/ and /itta?/. The results showed clear hemodynamic responses to both phonemic and prosodic changes in the temporal areas and part of the parietal and frontal regions. In particular, significantly higher hemoglobin (Hb) changes were observed for the prosodic change in the right temporal area than for that in the left one, whereas Hb responses to the vowel change were similarly elicited in bilateral temporal areas. However, Hb responses to the vowel contrast were asymmetrical in the parietal area (around the supramarginal gyrus), with stronger activation in the left. These results suggest a specialized function of the right hemisphere in prosody processing, which is already present in neonates. The parietal activities during phonemic processing were discussed in relation to verbal-auditory short-term memory. On the basis of this study and previous studies on older infants, the developmental process of functional lateralization from birth to 2 years of age for vowel and prosody was summarized.

  18. Can Children with (Central) Auditory Processing Disorders Ignore Irrelevant Sounds?

    Science.gov (United States)

    Elliott, Emily M.; Bhagat, Shaum P.; Lynn, Sharon D.

    2007-01-01

    This study investigated the effects of irrelevant sounds on the serial recall performance of visually presented digits in a sample of children diagnosed with (central) auditory processing disorders [(C)APD] and age- and span-matched control groups. The irrelevant sounds used were samples of tones and speech. Memory performance was significantly…

  19. Characteristics of Auditory Processing Disorders : A Systematic Review

    NARCIS (Netherlands)

    de Wit, Ellen; Visser-Bochane, Margot I; Steenbergen, Bert; van Dijk, Pim; van der Schans, Cees P; Luinge, Margreet R

    2016-01-01

    Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit

  20. Characteristics of auditory processing disorders: A systematic review

    NARCIS (Netherlands)

    Wit, E. de; Visser-Bochane, M.I.; Steenbergen, B.; Dijk, P. van; Schans, C.P. van der; Luinge, M.R.

    2016-01-01

    Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit

  1. Changes across time in the temporal responses of auditory nerve fibers stimulated by electric pulse trains.

    Science.gov (United States)

    Miller, Charles A; Hu, Ning; Zhang, Fawen; Robinson, Barbara K; Abbas, Paul J

    2008-03-01

    Most auditory prostheses use modulated electric pulse trains to excite the auditory nerve. There are, however, scant data regarding the effects of pulse trains on auditory nerve fiber (ANF) responses across the duration of such stimuli. We examined how temporal ANF properties changed with level and pulse rate across 300-ms pulse trains. Four measures were examined: (1) first-spike latency, (2) interspike interval (ISI), (3) vector strength (VS), and (4) Fano factor (FF, an index of the temporal variability of responsiveness). Data were obtained using 250-, 1,000-, and 5,000-pulse/s stimuli. First-spike latency decreased with increasing spike rate, with relatively small decrements observed for 5,000-pulse/s trains, presumably reflecting integration. ISIs to low-rate (250 pulse/s) trains were strongly locked to the stimuli, whereas ISIs evoked with 5,000-pulse/s trains were dominated by refractory and adaptation effects. Across time, VS decreased for low-rate trains but not for 5,000-pulse/s stimuli. At relatively high spike rates (>200 spike/s), VS values for 5,000-pulse/s trains were lower than those obtained with 250-pulse/s stimuli (even after accounting for the smaller periods of the 5,000-pulse/s stimuli), indicating a desynchronizing effect of high-rate stimuli. FF measures also indicated a desynchronizing effect of high-rate trains. Across a wide range of response rates, FF underwent relatively fast increases (i.e., within 100 ms) for 5,000-pulse/s stimuli. With a few exceptions, ISI, VS, and FF measures approached asymptotic values within the 300-ms duration of the low- and high-rate trains. These findings may have implications for designs of cochlear implant stimulus protocols, understanding electrically evoked compound action potentials, and interpretation of neural measures obtained at central nuclei, which depend on understanding the output of the auditory nerve.
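
    Vector strength and the Fano factor are standard quantities that can be computed directly from spike times and per-trial spike counts. The sketch below gives generic textbook definitions (not the authors' analysis code), with a toy spike train loosely locked to a 250-pulse/s stimulus.

        import numpy as np

        def vector_strength(spike_times, period):
            """Synchronization of spikes to a periodic stimulus (0 = none, 1 = perfect locking)."""
            phases = 2.0 * np.pi * (np.asarray(spike_times) % period) / period
            return np.abs(np.mean(np.exp(1j * phases)))

        def fano_factor(spike_counts):
            """Trial-to-trial variability of spike counts: variance divided by mean."""
            counts = np.asarray(spike_counts, dtype=float)
            return counts.var() / counts.mean()

        # Toy example: spikes jittered around a 4 ms period (250 pulses/s) over 300 ms
        spikes = np.arange(0.0, 0.3, 0.004) + np.random.default_rng(0).normal(0.0, 2e-4, 75)
        print(vector_strength(spikes, 0.004))     # close to 1 (tight phase locking)
        print(fano_factor([74, 75, 76, 75, 74]))  # well below 1 (very regular counts)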

  2. Proprioceptive cues modulate further processing of spatially congruent auditory information. a high-density EEG study.

    Science.gov (United States)

    Simon-Dack, S L; Teder-Sälejärvi, W A

    2008-07-18

    Multisensory integration and interaction occur when bimodal stimuli are presented as either spatially congruent or incongruent, but temporally coincident. We investigated whether proprioceptive cues interact with auditory attention to one of two sound sources in free-field. The participant's task was to attend to either the left or right speaker and to respond to occasional increased-bandwidth targets via a footswitch. We recorded high-density EEG in three experimental conditions: the participants either held the speakers in their hands (Hold), reached out close to them (Reach), or had their hands in their lap (Lap). In the last two conditions, the auditory event-related potentials (ERPs) revealed a prominent negativity around 200 ms post-stimulus (N2 wave) over fronto-central areas, which is a reliable index of further processing of spatial stimulus features in free-field. The N2 wave was markedly attenuated in the 'Hold' condition, which suggests that proprioceptive cues solidify the spatial information computed by the auditory system, thereby alleviating the need for further processing of spatial coordinates based solely on auditory information.

  3. Processing of sounds by population spikes in a model of primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Alex Loebel

    2007-10-01

    We propose a model of the primary auditory cortex (A1), in which each iso-frequency column is represented by a recurrent neural network with short-term synaptic depression. Such networks can emit Population Spikes, in which most of the neurons fire synchronously for a short time period. Different columns are interconnected in a way that reflects the tonotopic map in A1, and population spikes can propagate along the map from one column to the next, in a temporally precise manner that depends on the specific input presented to the network. The network, therefore, processes incoming sounds by precise sequences of population spikes that are embedded in a continuous asynchronous activity, with both of these response components carrying information about the inputs and interacting with each other. With these basic characteristics, the model can account for a wide range of experimental findings. We reproduce neuronal frequency tuning curves, whose width depends on the strength of the intracortical inhibitory and excitatory connections. Non-simultaneous two-tone stimuli show forward masking depending on their temporal separation, as well as on the duration of the first stimulus. The model also exhibits non-linear suppressive interactions between sub-threshold tones and broad-band noise inputs, similar to the hypersensitive locking suppression recently demonstrated in auditory cortex. We derive several predictions from the model. In particular, we predict that spontaneous activity in primary auditory cortex gates the temporally locked responses of A1 neurons to auditory stimuli. Spontaneous activity could, therefore, be a mechanism for rapid and reversible modulation of cortical processing.
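
    The population-spike dynamics described above depend on short-term synaptic depression. The sketch below uses assumed Tsodyks-Markram-style resource dynamics and parameter values (it is not the published network model) to show how repeated presynaptic spikes deplete synaptic resources and weaken successive responses, which is what limits how soon one population spike can follow another.

    ```python
    import numpy as np

    def depressing_synapse(spike_train, dt=0.001, U=0.5, tau_rec=0.8):
        """Effective synaptic drive U*x(t) for a binary spike train (1 = spike).
        Each spike consumes a fraction U of the available resources x, which then
        recover toward 1 with time constant tau_rec (seconds)."""
        x = 1.0
        efficacy = np.zeros(len(spike_train))
        for i, spk in enumerate(spike_train):
            x += dt * (1.0 - x) / tau_rec       # slow recovery of resources
            if spk:
                efficacy[i] = U * x             # drive transmitted by this spike
                x -= U * x                      # resources consumed by the spike
        return efficacy

    # Hypothetical usage: a 20 Hz presynaptic train shows progressively weaker drive.
    train = np.zeros(2000)                      # dt = 1 ms, i.e. 2 s of input
    train[::50] = 1                             # one spike every 50 ms
    print(depressing_synapse(train)[train == 1][:5])
    ```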

  4. Early neural disruption and auditory processing outcomes in rodent models: Implications for developmental language disability

    Directory of Open Access Journals (Sweden)

    Roslyn Holly Fitch

    2013-10-01

    Most researchers in the field of neural plasticity are familiar with the "Kennard Principle," which purports a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate – both developmentally and functionally – with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human "term," but only transient deficits (undetectable in adulthood) when induced in a "preterm" window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in

  5. Auditory-somatosensory temporal sensitivity improves when the somatosensory event is caused by voluntary body movement

    Directory of Open Access Journals (Sweden)

    Norimichi Kitagawa

    2016-12-01

    When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that temporal order judgment (TOJ) performance in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M=42.5 ms ±3.8 s.e.m.) than when their finger was passively stimulated (passive condition: M=66.8 ms ±6.3 s.e.m.). We further examined whether the performance improvement with voluntary action can be attributed to prediction of the timing of the stimulation from sensory cues (sensory-based prediction), to kinesthetic cues contained in voluntary action, and/or to prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant’s finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst at regular intervals (predictable condition), TOJ performance was not improved relative to the passive condition. These results suggest that the improvement in sensitivity to the temporal disparity between somatosensory and auditory events caused by voluntary action cannot be attributed to sensory-based prediction or kinesthetic cues. Rather, prediction from the efference copy of the motor command appears to be crucial for improving temporal sensitivity.
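
    For readers unfamiliar with how a just noticeable difference (JND) is obtained from temporal order judgments, the sketch below fits a cumulative Gaussian to hypothetical 'touch-first' response proportions and takes the JND as half the distance between the 25% and 75% points of the fitted function. The data, fitting choices, and variable names are illustrative assumptions, not the authors' analysis.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def cum_gauss(soa, pss, sigma):
        """Psychometric function: probability of a 'touch-first' response."""
        return norm.cdf(soa, loc=pss, scale=sigma)

    # Hypothetical data: SOAs in ms (negative = sound first) and response proportions.
    soas = np.array([-200, -120, -60, -20, 20, 60, 120, 200], float)
    p_touch_first = np.array([0.05, 0.12, 0.30, 0.45, 0.58, 0.74, 0.90, 0.97])

    (pss, sigma), _ = curve_fit(cum_gauss, soas, p_touch_first,
                                p0=[0.0, 50.0], bounds=([-200.0, 1.0], [200.0, 300.0]))
    jnd = 0.5 * (norm.ppf(0.75, pss, sigma) - norm.ppf(0.25, pss, sigma))
    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
    ```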

  6. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether...

  7. A neurophysiological deficit in early visual processing in schizophrenia patients with auditory hallucinations.

    Science.gov (United States)

    Kayser, Jürgen; Tenke, Craig E; Kroppmann, Christopher J; Alschuler, Daniel M; Fekri, Shiva; Gil, Roberto; Jarskog, L Fredrik; Harkavy-Friedman, Jill M; Bruder, Gerard E

    2012-09-01

    Existing 67-channel event-related potentials, obtained during recognition and working memory paradigms with words or faces, were used to examine early visual processing in schizophrenia patients prone to auditory hallucinations (AH, n = 26) or not (NH, n = 49) and healthy controls (HC, n = 46). Current source density (CSD) transforms revealed distinct, strongly left- (words) or right-lateralized (faces; N170) inferior-temporal N1 sinks (150 ms) in each group. N1 was quantified by temporal PCA of peak-adjusted CSDs. For words and faces in both paradigms, N1 was substantially reduced in AH compared with NH and HC, who did not differ from each other. The difference in N1 between AH and NH was not due to overall symptom severity or performance accuracy, with both groups showing comparable memory deficits. Our findings extend prior reports of reduced auditory N1 in AH, suggesting a broader early perceptual integration deficit that is not limited to the auditory modality.

  8. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  9. ERPs reveal the temporal dynamics of auditory word recognition in specific language impairment.

    Science.gov (United States)

    Malins, Jeffrey G; Desroches, Amy S; Robertson, Erin K; Newman, Randy Lynn; Archibald, Lisa M D; Joanisse, Marc F

    2013-07-01

    We used event-related potentials (ERPs) to compare auditory word recognition in children with specific language impairment (SLI group; N=14) to a group of typically developing children (TD group; N=14). Subjects were presented with pictures of items and heard auditory words that either matched or mismatched the pictures. Mismatches overlapped expected words in word-onset (cohort mismatches; see: DOLL, hear: dog), rhyme (CONE–bone), or were unrelated (SHELL–mug). In match trials, the SLI group showed a different pattern of N100 responses to auditory stimuli compared to the TD group, indicative of early auditory processing differences in SLI. However, the phonological mapping negativity (PMN) response to mismatching items was comparable across groups, suggesting that just like TD children, children with SLI are capable of establishing phonological expectations and detecting violations of these expectations in an online fashion. Perhaps most importantly, we observed a lack of attenuation of the N400 for rhyming words in the SLI group, which suggests that either these children were not as sensitive to rhyme similarity as their typically developing peers, or did not suppress lexical alternatives to the same extent. These findings help shed light on the underlying deficits responsible for SLI.

  10. Congenital external auditory canal atresia and stenosis: temporal bone CT findings

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Hoon; Kim, Bum Soo; Jung, So Lyung; Kim, Young Joo; Chun, Ho Jong; Choi, Kyu Ho; Park, Shi Nae [College of Medicine, Catholic Univ. of Korea, Seoul (Korea, Republic of)

    2002-04-01

    To determine the computed tomographic (CT) findings of atresia and stenosis of the external auditory canal (EAC), and to describe associated abnormalities in surrounding structures. We retrospectively reviewed the axial and coronal CT images of the temporal bone in 15 patients (M:F=8:7; mean age, 15.8 years) with 16 cases of EAC atresia (unilateral n=11, bilateral n=1) and EAC stenosis (unilateral n=3). Associated abnormalities of the EAC, tympanic cavity, ossicles, mastoid air cells, eustachian tube, facial nerve course, mandibular condyle and condylar fossa, sigmoid sinus and jugular bulb, and the base of the middle cranial fossa were evaluated. Thirteen cases of bony EAC atresia (one bilateral), with an atretic bony plate, were noted, along with one case of unilateral membranous atresia, in which soft tissue occupied the EAC. A unilateral lesion occurred more frequently on the right temporal bone (n=8, 73%). Associated abnormalities included a small tympanic cavity (n=8, 62%), decreased mastoid pneumatization (n=8, 62%), displacement of the mandibular condyle and the posterior wall of the condylar fossa (n=7, 54%), dilatation of the Eustachian tube (n=7, 54%), and inferior displacement of the temporal fossa base (n=8, 62%). Abnormalities of the ossicles were noted in the malleus (n=12, 92%), incus (n=10, 77%) and stapes (n=6, 46%). The course of the facial nerve was abnormal in four cases, and abnormality of the auditory canal was noted in one. Among three cases of EAC stenosis, ossicular aplasia was observed in one, and in another the location of the mandibular condyle and condylar fossa was abnormal. In the remaining case there was no associated abnormality. Atresia of the EAC is frequently accompanied by abnormalities of the middle ear cavity, ossicles, and adjacent structures other than the inner ear. For patients with atresia and stenosis of this canal, CT of the temporal bone is especially helpful in evaluating these associated abnormalities.

  11. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r² for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  12. Distributed Processing and Cortical Specialization for Speech and Environmental Sounds in Human Temporal Cortex

    Science.gov (United States)

    Leech, Robert; Saygin, Ayse Pinar

    2011-01-01

    Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…

  13. Quantifying auditory temporal stability in a large database of recorded music.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature"), none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS)--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.
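
    As a rough illustration of what "temporal stability of a series of detected beats" can mean in practice, the sketch below summarizes a beat-time series by its median tempo and the coefficient of variation of its inter-beat intervals; a low coefficient of variation indicates a steady pulse. This is an assumed toy summary, not the published BEATS statistics.

    ```python
    import numpy as np

    def beat_stability(beat_times):
        """Return (median tempo in BPM, coefficient of variation of inter-beat intervals)."""
        ibis = np.diff(np.asarray(beat_times, dtype=float))   # seconds between beats
        tempo_bpm = 60.0 / np.median(ibis)
        cv = ibis.std() / ibis.mean()
        return tempo_bpm, cv

    # Hypothetical beat times (s): a 120 BPM pulse with small timing jitter.
    rng = np.random.default_rng(1)
    beats = np.cumsum(np.full(200, 0.5) + rng.normal(0.0, 0.01, 200))
    print(beat_stability(beats))   # roughly (120, a few percent)
    ```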

  14. Stability of Auditory Discrimination and Novelty Processing in Physiological Aging

    Directory of Open Access Journals (Sweden)

    Alberto Raggi

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, Mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with an auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by a stability of the auditory discrimination and novelty processing. This is important for the arrangement of normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  15. Stability of auditory discrimination and novelty processing in physiological aging.

    Science.gov (United States)

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, Mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with an auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by a stability of the auditory discrimination and novelty processing. This is important for the arrangement of normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  16. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
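
    The information comparison described above can be approximated, in its simplest plug-in form, by histogramming a response variable (here, ISIs) conditional on the stimulus and evaluating the mutual information from the joint distribution. The sketch below uses hypothetical data and binning; it is not the authors' estimator and ignores sampling-bias corrections.

    ```python
    import numpy as np

    def mutual_information(stim_labels, responses, n_bins=10):
        """Plug-in estimate of I(S;R) in bits from paired stimulus labels and scalar responses."""
        stim_labels = np.asarray(stim_labels)
        edges = np.histogram_bin_edges(responses, bins=n_bins)
        resp_bins = np.digitize(responses, edges)
        stims, bins = np.unique(stim_labels), np.unique(resp_bins)
        joint = np.zeros((len(stims), len(bins)))
        for i, s in enumerate(stims):
            for j, b in enumerate(bins):
                joint[i, j] = np.sum((stim_labels == s) & (resp_bins == b))
        joint /= joint.sum()
        ps = joint.sum(axis=1, keepdims=True)     # marginal over stimuli
        pr = joint.sum(axis=0, keepdims=True)     # marginal over response bins
        nz = joint > 0
        return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

    # Hypothetical usage: ISIs (ms) recorded at two click-train repetition rates.
    rng = np.random.default_rng(2)
    rates = np.repeat([10, 40], 500)                            # clicks per second
    isis = np.concatenate([rng.normal(100, 10, 500),            # ~1/rate, with jitter
                           rng.normal(25, 5, 500)])
    print(f"{mutual_information(rates, isis):.2f} bits")
    ```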

  17. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses at 20 ms before and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control.

  18. Representation of spectro-temporal features of spoken words within the P1-N1-P2 and T-complex of the auditory evoked potentials (AEP).

    Science.gov (United States)

    Wagner, Monica; Roychoudhury, Arindam; Campanelli, Luca; Shafer, Valerie L; Martin, Brett; Steinschneider, Mitchell

    2016-02-12

    The purpose of the study was to determine whether P1-N1-P2 and T-complex morphology reflect spectro-temporal features within spoken words that approximate the natural variation of a speaker and whether waveform morphology is reliable at group and individual levels, necessary for probing auditory deficits. The P1-N1-P2 and T-complex to the syllables /pət/ and /sət/ within 70 natural word productions each were examined. EEG was recorded while participants heard nonsense word pairs and performed a syllable identification task to the second word in the pairs. Single trial auditory evoked potentials (AEP) to the first words were analyzed. Results found P1-N1-P2 and T-complex to reflect spectral and temporal feature processing. Also, results identified preliminary benchmarks for single trial response variability for individual subjects for sensory processing between 50 and 600 ms. P1-N1-P2 and T-complex, at least at group level, may serve as phenotypic signatures to identify deficits in spectro-temporal feature recognition and to determine area of deficit, the superior temporal plane or lateral superior temporal gyrus.

  19. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism: An EEG Power and BOLD fMRI Investigation

    Science.gov (United States)

    Hames, Elizabeth C.; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C.; Baker, Mary; Zupancic, Stephen; O’Boyle, Michael; Richman, David

    2016-01-01

    Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20–28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions; reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all auditory block vs. the second presentation of a visual stimulus in an all visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs. PMID:27148020

  20. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required ‘active’ discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral ‘auditory’ alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  1. Temporal Multisensory Processing and its Relationship to Autistic Functioning

    Directory of Open Access Journals (Sweden)

    Leslie D Kwakye

    2011-10-01

    Autism spectrum disorders (ASD) form a continuum of neurodevelopmental disorders characterized by deficits in communication and reciprocal social interaction, repetitive behaviors, and restricted interests. Sensory disturbances are also frequently reported in clinical and autobiographical accounts. However, few empirical studies have characterized the fundamental features of sensory and multisensory processing in ASD. Recently published studies have shown that children with ASD are able to integrate low-level multisensory stimuli, but do so over an enlarged temporal window when compared with typically developing (TD) children. The current study sought to expand upon these previous findings by examining differences in the temporal processing of low-level multisensory stimuli in high-functioning (HFA) and low-functioning (LFA) children with ASD in the context of a simple reaction time task. Contrary to these previous findings, children with both HFA and LFA showed smaller gains in performance under multisensory (i.e., combined visual-auditory) conditions when compared with their TD peers. Additionally, the pattern of performance gains as a function of SOA was similar across groups, suggesting similarities in the temporal processing of these cues that run counter to previous studies that have shown an enlarged “temporal window.” These findings add complexity to our understanding of the multisensory processing of low-level stimuli in ASD and may hold promise for the development of more sensitive diagnostic measures and improved remediation strategies in autism.

  2. Bilateral Collicular Interaction: Modulation of Auditory Signal Processing in Amplitude Domain

    Science.gov (United States)

    Fu, Zi-Ying; Wang, Xin; Jen, Philip H.-S.; Chen, Qi-Cai

    2012-01-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from many lower auditory nuclei, intrinsic projections within the IC, contralateral IC through the commissure of the IC and from the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in modulating amplitude-domain signal processing using electrophysiological recording, acoustic and focal electrical stimulation. Focal electrical stimulation of one (ipsilateral) IC produces widespread inhibition (61.6%) and focused facilitation (9.1%) of responses of neurons in the other (contralateral) IC, while 29.3% of the neurons were not affected. Bilateral collicular interaction produces a decrease in the response magnitude and an increase in the response latency of inhibited IC neurons but produces opposite effects on the response of facilitated IC neurons. These two groups of neurons are not separately located and are tonotopically organized within the IC. The modulation effect is most effective at low sound level and is dependent upon the interval between the acoustic and electric stimuli. The focal electrical stimulation of the ipsilateral IC compresses or expands the rate-level functions of contralateral IC neurons. The focal electrical stimulation also produces a shift in the minimum threshold and dynamic range of contralateral IC neurons for as long as 150 minutes. The degree of bilateral collicular interaction is dependent upon the difference in the best frequency between the electrically stimulated IC neurons and modulated IC neurons. These data suggest that bilateral collicular interaction mainly changes the ratio between excitation and inhibition during signal processing so as to sharpen the amplitude sensitivity of IC neurons. Bilateral interaction may be also involved in acoustic

  3. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    Science.gov (United States)

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  4. Auditory Streaming as an Online Classification Process with Evidence Accumulation.

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
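
    A toy generative version of the mechanism described above (assumed functional forms and parameters, not the authors' probabilistic model) makes the predicted positive correlation explicit: evidence for the opposite percept accumulates, with saturation, over the current phase, and a larger store of accumulated evidence lowers the switching hazard of the following phase, so long phases tend to be followed by long phases.

    ```python
    import numpy as np

    def simulate_phases(n_phases=2000, hazard0=0.05, e_max=4.0, tau_e=50.0,
                        noise=0.3, seed=0):
        """Toy phase-duration model: the evidence accumulated during one phase
        lowers the per-step switching hazard of the next phase."""
        rng = np.random.default_rng(seed)
        durations, evidence = [], 0.0
        for _ in range(n_phases):
            hazard = hazard0 / (1.0 + evidence)            # per-step switch probability
            durations.append(rng.geometric(min(max(hazard, 1e-6), 1.0)))
            # noisy, saturating evidence accumulated during the phase just ended
            evidence = e_max * (1.0 - np.exp(-durations[-1] / tau_e))
            evidence = max(evidence * (1.0 + noise * rng.standard_normal()), 0.0)
        return np.array(durations)

    d = simulate_phases()
    print(np.corrcoef(d[:-1], d[1:])[0, 1])   # positive lag-1 correlation of durations
    ```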

  5. Auditory Streaming as an Online Classification Process with Evidence Accumulation

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named “auditory streaming”. Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally. PMID:26671774

  6. Temporal Resolution of ChR2 and Chronos in an Optogenetic-based Auditory Brainstem Implant Model: Implications for the Development and Application of Auditory Opsins

    Science.gov (United States)

    Hight, A. E.; Kozin, Elliott D.; Darrow, Keith; Lehmann, Ashton; Boyden, Edward; Brown, M. Christian; Lee, Daniel J.

    2015-01-01

    Contemporary auditory brainstem implant (ABI) performance is limited by reliance on electrical stimulation, with its accompanying channel cross talk and current spread to non-auditory neurons. A new-generation ABI based on optogenetic technology may ameliorate limitations fundamental to electrical neurostimulation. The most widely studied opsin is channelrhodopsin-2 (ChR2); however, its relatively slow kinetic properties may prevent the encoding of auditory information at high stimulation rates. In the present study, we compare the temporal resolution of light-evoked responses of a recently developed fast opsin, Chronos, to ChR2 in a murine ABI model. Viral mediated gene transfer via a posterolateral craniotomy was used to express Chronos or ChR2 in the mouse cochlear nucleus (CN). Following a four- to six-week incubation period, blue light (473 nm) was delivered via an optical fiber placed directly on the surface of the infected CN, and neural activity was recorded in the contralateral inferior colliculus (IC). Both ChR2 and Chronos evoked sustained responses to all stimuli, even at high driven rates. In addition, optical stimulation evoked excitatory responses throughout the tonotopic axis of the IC. Synchrony of the light-evoked response to stimulus rates of 14–448 pulses/s was higher in Chronos compared to ChR2 mice (p<0.05 at 56, 168, and 224 pulses/s). Our results demonstrate that Chronos has the ability to drive the auditory system at higher stimulation rates than ChR2 and may be a more ideal opsin for manipulation of auditory pathways in future optogenetic-based neuroprostheses. PMID:25598479

  7. Auditory processing in dysphonic children

    Directory of Open Access Journals (Sweden)

    Mirian Aratangy Arnaut

    2011-06-01

    Contemporary cross-sectional cohort study. There is evidence that auditory perception influences the development of oral and written language, as well as the self-perception of vocal conditions, and that maturation of the auditory system can affect this process. OBJECTIVE: To characterize the auditory skills of temporal ordering and localization in dysphonic children. MATERIALS AND METHODS: We assessed 42 children (4 to 8 years). Study group: 31 dysphonic children; comparison group: 11 children without vocal complaints. All had normal auditory thresholds and normal cochleo-eyelid reflexes, and all underwent the simplified auditory processing assessment (Pereira, 1993). The Mann-Whitney and Kruskal-Wallis tests were used to compare the groups, with a significance level of 0.05 (5%). RESULTS: On the simplified assessment, 100% of the comparison group and 61.29% of the study group had normal results. The groups were similar on the localization and verbal sequential memory tests. Non-verbal sequential memory showed worse results in the dysphonic children, and within this group performance was poorest among the four- to six-year-olds. CONCLUSION: The dysphonic children showed changes in localization or temporal ordering skills; the non-verbal temporal ordering skill differentiated the dysphonic group. In this group, sound localization improved with age.

  8. Periodicity extraction in the anuran auditory nerve. II: Phase and temporal fine structure.

    Science.gov (United States)

    Simmons, A M; Reese, G; Ferragamo, M

    1993-06-01

    phase locking to simple sinusoids. Increasing stimulus intensity also shifts the synchronized responses of some fibers away from the fundamental frequency to one of the low-frequency harmonics in the stimuli. These data suggest that the synchronized firing of bullfrog eighth nerve fibers operates to extract the waveform periodicity of complex, multiple-harmonic stimuli, and this periodicity extraction is influenced by the phase spectrum and temporal fine structure of the stimuli. The similarity in response patterns of amphibian papilla and basilar papilla fibers argues that the frog auditory system employs primarily a temporal mechanism for extraction of first harmonic periodicity.

  9. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  10. The effect of mild-to-moderate hearing loss on auditory and emotion processing networks

    Directory of Open Access Journals (Sweden)

    Fatima T Husain

    2014-02-01

    We investigated the impact of hearing loss on emotional processing using task- and rest-based functional magnetic resonance imaging. Two age-matched groups of middle-aged participants were recruited: one with bilateral high-frequency hearing loss (HL) and a control group with normal hearing (NH). During the task-based portion of the experiment, participants were instructed to rate affective stimuli from the International Affective Digital Sounds database as pleasant, unpleasant, or neutral. In the resting state experiment, participants were told to fixate on a '+' sign on a screen for five minutes. The results of both the task-based and resting state studies suggest that NH and HL patients differ in their emotional response. Specifically, in the task-based study, we found slower response to affective but not neutral sounds by the HL group compared to the NH group. This was reflected in the brain activation patterns, with the NH group employing the expected limbic and auditory regions including the left amygdala, left parahippocampus, right middle temporal gyrus and left superior temporal gyrus to a greater extent in processing affective stimuli when compared to the HL group. In the resting state study, we observed no significant differences in connectivity of the auditory network between the groups. In the dorsal attention network, HL patients exhibited decreased connectivity between seed regions and left insula and left postcentral gyrus compared to controls. The default mode network was also altered, showing increased connectivity between seeds and left middle frontal gyrus in the HL group. Further targeted analysis revealed increased intrinsic connectivity between the right middle temporal gyrus and the right precentral gyrus. The results from both studies suggest neuronal reorganization as a consequence of hearing loss, most notably in networks responding to emotional sounds.

  11. Suprathreshold auditory processing deficits in noise: Effects of hearing loss and age.

    Science.gov (United States)

    Kortlang, Steffen; Mauermann, Manfred; Ewert, Stephan D

    2016-01-01

    People with sensorineural hearing loss generally suffer from a reduced ability to understand speech in complex acoustic listening situations, particularly when background noise is present. In addition to the loss of audibility, a mixture of suprathreshold processing deficits is possibly involved, like altered basilar membrane compression and related changes, as well as a reduced ability of temporal coding. A series of 6 monaural psychoacoustic experiments at 0.5, 2, and 6 kHz was conducted with 18 subjects, divided equally into groups of young normal-hearing, older normal-hearing and older hearing-impaired listeners, aiming at disentangling the effects of age and hearing loss on psychoacoustic performance in noise. Random frequency modulation detection thresholds (RFMDTs) with a low-rate modulator in wide-band noise, and discrimination of a phase-jittered Schroeder-phase from a random-phase harmonic tone complex are suggested to characterize the individual ability of temporal processing. The outcome was compared to thresholds of pure tones and narrow-band noise, loudness growth functions, auditory filter bandwidths, and tone-in-noise detection thresholds. At 500 Hz, results suggest a contribution of temporal fine structure (TFS) to pure-tone detection thresholds. Significant correlation with auditory thresholds and filter bandwidths indicated an impact of frequency selectivity on TFS usability in wide-band noise. When controlling for the effect of threshold sensitivity, the listener's age significantly correlated with tone-in-noise detection and RFMDTs in noise at 500 Hz, showing that older listeners were particularly affected by background noise at low carrier frequencies.

  12. Subthreshold K+ Channel Dynamics Interact With Stimulus Spectrum to Influence Temporal Coding in an Auditory Brain Stem Model

    Science.gov (United States)

    Day, Mitchell L.; Doiron, Brent; Rinzel, John

    2013-01-01

    Neurons in the auditory brain stem encode signals with exceptional temporal precision. A low-threshold potassium current, IKLT, present in many auditory brain stem structures and thought to enhance temporal encoding, facilitates spike selection of rapid input current transients through an associated dynamic gate. Whether the dynamic nature of IKLT interacts with the timescales in spectrally rich input to influence spike encoding remains unclear. We examine the general influence of IKLT on spike encoding of stochastic stimuli using a pattern classification analysis between spike responses from a ventral cochlear nucleus (VCN) model containing IKLT, and the same model with the dynamics removed. The influence of IKLT on spike encoding depended on the spectral content of the current stimulus such that maximal IKLT influence occurred for stimuli with power concentrated at frequencies low enough (<500 Hz) to allow IKLT activation. Further, broadband stimuli significantly decreased the influence of IKLT on spike encoding, suggesting that broadband stimuli are not well suited for investigating the influence of some dynamic membrane nonlinearities. Finally, pattern classification on spike responses was performed for physiologically realistic conductance stimuli created from various sounds filtered through an auditory nerve (AN) model. Regardless of the sound, the synaptic input arriving at VCN had similar low-pass power spectra, which led to a large influence of IKLT on spike encoding, suggesting that the subthreshold dynamics of IKLT plays a significant role in shaping the response of real auditory brain stem neurons. PMID:18057115
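
    To make the "dynamic gate" idea concrete, the sketch below implements a passive single-compartment membrane with a low-threshold potassium conductance whose activation variable tracks the voltage with a short time constant. All parameters are assumptions chosen only to illustrate the qualitative effect (the gate engages during slow depolarization and therefore favors rapid input transients); it is not the VCN model used in the study.

    ```python
    import numpy as np

    def simulate(i_stim, dt=0.01, dynamic_gate=True):
        """Membrane potential (mV) of a passive cell with an I_KLT-like conductance.
        i_stim is in pA, dt in ms. Setting dynamic_gate=False freezes the gate at
        its resting value, analogous to the 'dynamics removed' comparison model."""
        c_m, g_leak, e_leak = 12.0, 2.0, -65.0        # pF, nS, mV
        g_klt, e_k = 20.0, -90.0                      # nS, mV
        tau_w = 5.0                                   # ms (simplified, fixed)
        w_inf = lambda v: 1.0 / (1.0 + np.exp(-(v + 48.0) / 6.0))
        v = -65.0
        w = w_inf(v)
        v_trace = np.empty(len(i_stim))
        for i, i_ext in enumerate(i_stim):
            if dynamic_gate:
                w += dt * (w_inf(v) - w) / tau_w      # gate follows the voltage
            i_ion = g_leak * (v - e_leak) + g_klt * w**4 * (v - e_k)
            v += dt * (i_ext - i_ion) / c_m
            v_trace[i] = v
        return v_trace

    # Slow ramp vs. abrupt step to the same final current (pA): the dynamic gate
    # clamps the slow ramp, so only the rapid transient depolarizes the model strongly.
    t = np.arange(0, 50, 0.01)                        # 50 ms at dt = 0.01 ms
    slow = np.clip(4.0 * t, 0.0, 200.0)
    fast = np.where(t > 25.0, 200.0, 0.0)
    print(simulate(slow).max(), simulate(fast).max())
    ```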

  13. The Impact of Mild Central Auditory Processing Disorder on School Performance during Adolescence

    Science.gov (United States)

    Heine, Chyrisse; Slone, Michelle

    2008-01-01

    Central Auditory Processing (CAP) difficulties have attained increasing recognition leading to escalating rates of referrals for evaluation. Recognition of the association between (Central) Auditory Processing Disorder ((C)APD) and language, learning, and literacy difficulties has resulted in increased referrals and detection in school-aged…

  14. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means...
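
    For readers unfamiliar with multi-channel dynamic-range compression, the sketch below illustrates the basic signal path: band-splitting, envelope tracking, and a level-dependent gain that lets output level grow by only 1/ratio dB per dB of input above a compression threshold. Band edges, time constant, threshold, and ratio are arbitrary illustrative assumptions, not the settings of any particular hearing aid or of the study.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    def drc(signal, fs, bands=((100, 1000), (1000, 4000)),
            threshold_db=-40.0, ratio=3.0, tau=0.010):
        """Simple multi-band dynamic-range compressor (illustrative only)."""
        out = np.zeros_like(signal)
        alpha = np.exp(-1.0 / (tau * fs))              # one-pole envelope smoother
        for lo, hi in bands:
            sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, signal)
            env = np.zeros_like(band)
            e = 1e-6
            for i, x in enumerate(band):               # envelope follower
                e = alpha * e + (1.0 - alpha) * abs(x)
                env[i] = e
            level_db = 20.0 * np.log10(np.maximum(env, 1e-6))
            gain_db = np.where(level_db > threshold_db,
                               (threshold_db - level_db) * (1.0 - 1.0 / ratio), 0.0)
            out += band * 10.0 ** (gain_db / 20.0)
        return out

    # Hypothetical usage: a 500 Hz tone that doubles in amplitude halfway through
    # comes out with a much smaller level step after compression.
    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 500 * t) * np.where(t < 0.5, 0.1, 0.2)
    y = drc(x, fs)
    ```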

  15. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which did or did not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  16. Temporal sequence of visuo-auditory interaction in multiple areas of the guinea pig visual cortex.

    Directory of Open Access Journals (Sweden)

    Masataka Nishimura

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation, and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.

  17. Encoding of sound localization cues by an identified auditory interneuron: effects of stimulus temporal pattern.

    Science.gov (United States)

    Samson, Annie-Hélène; Pollack, Gerald S

    2002-11-01

    An important cue for sound localization is binaural comparison of stimulus intensity. Two features of neuronal responses, response strength, i.e., spike count and/or rate, and response latency, vary with stimulus intensity, and binaural comparison of either or both might underlie localization. Previous studies at the receptor-neuron level showed that these response features are affected by the stimulus temporal pattern. When sounds are repeated rapidly, as occurs in many natural sounds, response strength decreases and latency increases, resulting in altered coding of localization cues. In this study we analyze binaural cues for sound localization at the level of an identified pair of interneurons (the left and right AN2) in the cricket auditory system, with emphasis on the effects of stimulus temporal pattern on binaural response differences. AN2 spike count decreases with rapidly repeated stimulation and latency increases. Both effects depend on stimulus intensity. Because of the difference in intensity at the two ears, binaural differences in spike count and latency change as stimulation continues. The binaural difference in spike count decreases, whereas the difference in latency increases. The proportional changes in response strength and in latency are greater at the interneuron level than at the receptor level, suggesting that factors in addition to decrement of receptor responses are involved. Intracellular recordings reveal that a slowly building, long-lasting hyperpolarization is established in AN2. At the same time, the level of depolarization reached during the excitatory postsynaptic potential (EPSP) resulting from each sound stimulus decreases. Neither these effects on membrane potential nor the changes in spiking response are accounted for by contralateral inhibition. Based on comparison of our results with earlier behavioral experiments, it is unlikely that crickets use the binaural difference in latency of AN2 responses as the main cue for

  18. Shared and Divergent Auditory and Tactile Processing in Children with Autism and Children with Sensory Processing Dysfunction Relative to Typically Developing Peers.

    Science.gov (United States)

    Demopoulos, Carly; Brandes-Aitken, Annie N; Desai, Shivani S; Hill, Susanna S; Antovich, Ashley D; Harris, Julia; Marco, Elysa J

    2015-07-01

    The aim of this study was to compare sensory processing in typically developing children (TDC), children with Autism Spectrum Disorder (ASD), and those with sensory processing dysfunction (SPD) in the absence of an ASD. Performance-based measures of auditory and tactile processing were compared between male children ages 8-12 years assigned to an ASD (N=20), SPD (N=15), or TDC group (N=19). Both the SPD and ASD groups were impaired relative to the TDC group on a performance-based measure of tactile processing (right-handed graphesthesia). In contrast, only the ASD group showed significant impairment on an auditory processing index assessing dichotic listening, temporal patterning, and auditory discrimination. Furthermore, this impaired auditory processing was associated with parent-rated communication skills for both the ASD group and the combined study sample. No significant group differences were detected on measures of left-handed graphesthesia, tactile sensitivity, or form discrimination; however, more participants in the SPD group demonstrated a higher tactile detection threshold (60%) compared to the TDC (26.7%) and ASD groups (35%). This study provides support for use of performance-based measures in the assessment of children with ASD and SPD and highlights the need to better understand how sensory processing affects the higher order cognitive abilities associated with ASD, such as verbal and non-verbal communication, regardless of diagnostic classification.

  19. The impact of educational level on performance on auditory processing tests

    Directory of Open Access Journals (Sweden)

    Cristina F.B. Murphy

    2016-03-01

    Full Text Available Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor 'years of schooling' was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  20. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  1. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA, 1982, 416-425]. When a “classical” gammatone filterbank was applied within this spectrum-based model, the model largely underestimated human performance at high signal frequencies. However, this limitation could be resolved by employing an auditory filterbank with narrower filters. This novel...
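
    As a back-of-the-envelope illustration of the spectral modulation described above (not part of the study itself; the reflection gain and delay values below are arbitrary assumptions), a direct sound plus a single delayed reflection acts as a comb filter whose power response ripples with a period of 1/delay:

```python
# Illustrative sketch: power spectrum of a direct sound plus one wall reflection,
# y(t) = x(t) + a*x(t - d), so |H(f)|^2 = 1 + a^2 + 2*a*cos(2*pi*f*d).
# The ripple with period 1/d is what listeners may hear as coloration or pitch.
import numpy as np

def comb_power_spectrum(freqs_hz, reflection_gain=0.8, delay_s=0.004):
    """Power response of a direct sound plus a single delayed reflection."""
    return (1.0 + reflection_gain ** 2
            + 2.0 * reflection_gain * np.cos(2.0 * np.pi * freqs_hz * delay_s))

freqs = np.linspace(0.0, 4000.0, 1000)        # evaluate up to 4 kHz
spectrum = comb_power_spectrum(freqs)         # ripple period = 1 / 0.004 s = 250 Hz
print("spectral ripple period: %.0f Hz" % (1.0 / 0.004))
print("peak-to-notch ratio: %.1f dB" % (10 * np.log10(spectrum.max() / spectrum.min())))
```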

  2. Prenatal IV Cocaine: Alterations in Auditory Information Processing

    Directory of Open Access Journals (Sweden)

    Charles F. Mactutus

    2011-06-01

    Full Text Available One clue regarding the basis of cocaine-induced deficits in attentional processing is provided by the clinical findings of changes in the infants’ startle response; observations buttressed by neurophysiological evidence of alterations in brainstem transmission time. Using the IV route of administration and doses that mimic the peak arterial levels of cocaine use in humans, the present study examined the effects of prenatal cocaine on auditory information processing via tests of the acoustic startle response (ASR), habituation, and prepulse inhibition (PPI) in the offspring. Nulliparous Long-Evans female rats, implanted with an IV access port prior to breeding, were administered saline, 0.5, 1.0, or 3.0 mg/kg/injection of cocaine HCl (COC) from gestation day (GD) 8-20 (1x/day on GD8-14, 2x/day on GD15-20). COC had no significant effects on maternal/litter parameters or growth of the offspring. At 18-20 days of age, one male and one female, randomly selected from each litter, displayed an increased ASR (>30% for males at 1.0 mg/kg and >30% for females at 3.0 mg/kg). When reassessed in adulthood (D90-100), a linear dose-response increase was noted on response amplitude. At both test ages, within-session habituation was retarded by prenatal cocaine treatment. Testing the females in diestrus vs. estrus did not alter the results. Prenatal cocaine altered the PPI response function across interstimulus interval (ISI) and induced significant sex-dependent changes in response latency. Idazoxan, an alpha2-adrenergic receptor antagonist, significantly enhanced the ASR, but less enhancement was noted with increasing doses of prenatal cocaine. Thus, in utero exposure to cocaine, when delivered via a protocol designed to capture prominent features of recreational usage, causes persistent, if not permanent, alterations in auditory information processing, and suggests dysfunction of the central noradrenergic circuitry modulating, if not mediating, these responses.

  3. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    Science.gov (United States)

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic rules to more complex morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations.

  4. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    Science.gov (United States)

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
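
    One of the methodological pillars named above, representational similarity analysis, can be sketched in a few lines; the random data, array shapes, and variable names below are illustrative assumptions, not the authors' pipeline:

```python
# Minimal representational-similarity sketch: build one representational
# dissimilarity matrix (RDM) per measurement modality and correlate their
# upper triangles. Data here are random stand-ins.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 20
meg_patterns = rng.standard_normal((n_stimuli, 306))    # e.g. MEG sensor patterns
fmri_patterns = rng.standard_normal((n_stimuli, 500))   # e.g. fMRI voxel patterns

def rdm(patterns):
    """Representational dissimilarity matrix: correlation distance between stimuli."""
    return squareform(pdist(patterns, metric="correlation"))

meg_rdm, fmri_rdm = rdm(meg_patterns), rdm(fmri_patterns)

# Compare the two representational geometries on the upper triangle only.
iu = np.triu_indices(n_stimuli, k=1)
rho, p = spearmanr(meg_rdm[iu], fmri_rdm[iu])
print(f"MEG-fMRI RDM correlation: rho={rho:.2f}, p={p:.3f}")
```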

  5. A Phenomenological Model of the Electrically Stimulated Auditory Nerve Fiber: Temporal and Biphasic Response Properties.

    Science.gov (United States)

    Horne, Colin D F; Sumner, Christian J; Seeber, Bernhard U

    2016-01-01

    We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli, in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate and fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in the threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF responses to a large variety of pulse shapes are reproduced correctly by this model.
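
    A minimal sketch of the mechanism described above, assuming arbitrary parameter values and a deliberately simplified abolition rule (the published model is fitted to ANF data and is more detailed): a noisy threshold crossing schedules a spike after a fixed latency delay, and a trailing anodic phase that drives the membrane below rest before that delay elapses cancels the pending spike; inserting an interphase gap lets the pending spike escape, qualitatively mirroring the IPG effect.

```python
# Illustrative stochastic leaky integrate-and-fire (SLIF) sketch; parameter
# values and the abolition rule are assumptions, not the fitted model.
import numpy as np

def firing_probability(pulse, dt=1e-6, tau=250e-6, rel_spread=0.06,
                       latency_delay=200e-6, n_trials=500, seed=0):
    """Fraction of presentations of `pulse` that end in a spike."""
    rng = np.random.default_rng(seed)
    fired = 0
    for _ in range(n_trials):
        threshold = 1.0 + rel_spread * rng.standard_normal()   # noisy threshold
        v, cross_t = 0.0, None
        for i, drive in enumerate(pulse):
            v += dt * (-v / tau + drive)                       # leaky integration
            t = (i + 1) * dt
            if cross_t is None and v >= threshold:
                cross_t = t                                    # spike is now pending
            elif cross_t is not None and t < cross_t + latency_delay and v < 0.0:
                cross_t = None                                 # anodic phase abolished it
        fired += cross_t is not None
    return fired / n_trials

dt, amp = 1e-6, 25e3
mono = np.zeros(800); mono[:50] = amp                  # 50-us cathodic phase only
biph = mono.copy(); biph[50:100] = -amp                # trailing anodic phase, no gap
biph_gap = mono.copy(); biph_gap[350:400] = -amp       # same, with a 300-us interphase gap
for name, p in [("monophasic", mono), ("biphasic, no gap", biph), ("biphasic, 300-us IPG", biph_gap)]:
    print(f"{name:>22s}: firing probability {firing_probability(p, dt=dt):.2f}")
```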

  6. Locating Melody Processing Activity in Auditory Cortex with Magnetoencephalography.

    Science.gov (United States)

    Patterson, Roy D; Andermann, Martin; Uppenkamp, Stefan; Rupp, André

    2016-01-01

    This paper describes a technique for isolating the brain activity associated with melodic pitch processing. The magnetoencephalographic (MEG) response to a four-note diatonic melody built of French horn notes is contrasted with the response to a control sequence containing four identical, "tonic" notes. The transient response (TR) to the first note of each bar is dominated by energy-onset activity; the melody processing is observed by contrasting the TRs to the remaining melodic and tonic notes of the bar (2-4). They have a uniform shape within a tonic or melodic sequence, which makes it possible to fit a 4-dipole model and show that there are two sources in each hemisphere--a melody source in the anterior part of Heschl's gyrus (HG) and an onset source about 10 mm posterior to it, in planum temporale (PT). The N1m to the initial note has a short latency and the same magnitude for the tonic and the melodic sequences. The melody activity is distinguished by the relative sizes of the N1m and P2m components of the TRs to notes 2-4. In the anterior source a given note elicits a much larger N1m-P2m complex with a shorter latency when it is part of a melodic sequence. This study shows how to isolate the N1m, energy-onset response in PT, and produce a clean melody response in the anterior part of auditory cortex (HG).

  7. Assessment of anodal and cathodal transcranial direct current stimulation (tDCS) on MMN-indexed auditory sensory processing.

    Science.gov (United States)

    Impey, Danielle; de la Salle, Sara; Knott, Verner

    2016-06-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a very weak constant current to temporarily excite (anodal stimulation) or inhibit (cathodal stimulation) activity in the brain area of interest via small electrodes placed on the scalp. Currently, tDCS of the frontal cortex is being used as a tool to investigate cognition in healthy controls and to improve symptoms in neurological and psychiatric patients. tDCS has been found to facilitate cognitive performance on measures of attention, memory, and frontal-executive functions. Recently, a short session of anodal tDCS over the temporal lobe has been shown to increase auditory sensory processing as indexed by the Mismatch Negativity (MMN) event-related potential (ERP). This preliminary pilot study examined the separate and interacting effects of both anodal and cathodal tDCS on MMN-indexed auditory pitch discrimination. In a randomized, double blind design, the MMN was assessed before (baseline) and after tDCS (2mA, 20min) in 2 separate sessions, one involving 'sham' stimulation (the device is turned off), followed by anodal stimulation (to temporarily excite cortical activity locally), and one involving cathodal stimulation (to temporarily decrease cortical activity locally), followed by anodal stimulation. Results demonstrated that anodal tDCS over the temporal cortex increased MMN-indexed auditory detection of pitch deviance, and while cathodal tDCS decreased auditory discrimination in baseline-stratified groups, subsequent anodal stimulation did not significantly alter MMN amplitudes. These findings strengthen the position that tDCS effects on cognition extend to the neural processing of sensory input and raise the possibility that this neuromodulatory technique may be useful for investigating sensory processing deficits in clinical populations.

  8. Functional role of delta and theta band oscillations for auditory feedback processing during vocal pitch motor control.

    Science.gov (United States)

    Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A; Larson, Charles R

    2015-01-01

    The answer to the question of how the brain incorporates sensory feedback and links it with motor function to achieve goal-directed movement during vocalization remains unclear. We investigated the mechanisms of voice pitch motor control by examining the spectro-temporal dynamics of EEG signals when non-musicians (NM), relative pitch (RP), and absolute pitch (AP) musicians maintained vocalizations of a vowel sound and received randomized ± 100 cents pitch-shift stimuli in their auditory feedback. We identified a phase-synchronized (evoked) fronto-central activation within the theta band (5-8 Hz) that temporally overlapped with compensatory vocal responses to pitch-shifted auditory feedback and was significantly stronger in RP and AP musicians compared with non-musicians. A second component involved a non-phase-synchronized (induced) frontal activation within the delta band (1-4 Hz) that emerged at approximately 1 s after the stimulus onset. The delta activation was significantly stronger in the NM compared with RP and AP groups and correlated with the pitch rebound error (PRE), indicating the degree to which subjects failed to re-adjust their voice pitch to baseline after the stimulus offset. We propose that the evoked theta is a neurophysiological marker of enhanced pitch processing in musicians and reflects mechanisms by which humans incorporate auditory feedback to control their voice pitch. We also suggest that the delta activation reflects adaptive neural processes by which vocal production errors are monitored and used to update the state of sensory-motor networks for driving subsequent vocal behaviors. This notion is corroborated by our findings showing that larger PREs were associated with greater delta band activity in the NM compared with RP and AP groups. These findings provide new insights into the neural mechanisms of auditory feedback processing for vocal pitch motor control.
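
    The evoked/induced distinction used above can be illustrated with a short sketch (random stand-in data; the sampling rate, band edges, and filter settings are assumptions, not the study's pipeline): phase-locked activity survives across-trial averaging, whereas induced activity is estimated from single trials after the average response has been removed.

```python
# Sketch of separating evoked (phase-locked) from induced (non-phase-locked)
# band power; all numbers here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 500.0                                           # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
trials = rng.standard_normal((60, 1000))             # stand-in for 60 epoched EEG trials

def band_envelope(x, low, high, fs):
    """Band-pass filter and return the Hilbert amplitude envelope."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x, axis=-1), axis=-1))

# Evoked power: envelope of the across-trial average (the ERP) in the theta band.
evoked_theta = band_envelope(trials.mean(axis=0), 5.0, 8.0, fs)

# Induced power: average single-trial envelope after the ERP has been subtracted.
residual = trials - trials.mean(axis=0, keepdims=True)
induced_delta = band_envelope(residual, 1.0, 4.0, fs).mean(axis=0)

print("mean theta evoked power:", float((evoked_theta ** 2).mean()))
print("mean delta induced power:", float((induced_delta ** 2).mean()))
```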

  9. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways.

  10. Effects of aging on peripheral and central auditory processing in rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Prévost, François; Guillemot, Jean-Paul

    2016-08-01

    Hearing loss is a hallmark sign in the elderly population. Decline in auditory perception provokes deficits in the ability to localize sound sources and reduces speech perception, particularly in noise. In addition to a loss of peripheral hearing sensitivity, changes in more complex central structures have also been demonstrated. In this context, this study examines the auditory directional maps in the deep layers of the superior colliculus of the rat. Hence, anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats underwent distortion product otoacoustic emission (DPOAE) testing to assess cochlear function. Then, auditory brainstem responses (ABRs) were assessed, followed by extracellular single-unit recordings to determine age-related effects on central auditory functions. DPOAE amplitude levels were decreased in aged rats although they were still present between 3.0 and 24.0 kHz. ABR level thresholds in aged rats were significantly elevated at an early (cochlear nucleus - wave II) stage in the auditory brainstem. In the superior colliculus, thresholds were increased and the tuning widths of the directional receptive fields were significantly wider. Moreover, no systematic directional spatial arrangement was present among the neurons of the aged rats, implying that the topographical organization of the auditory directional map was abolished. These results suggest that the deterioration of the auditory directional spatial map can, to some extent, be attributable to age-related dysfunction at more central, perceptual stages of auditory processing.

  11. Auditory processing in children : a study of the effects of age, hearing impairment and language impairment on auditory abilities in children

    NARCIS (Netherlands)

    Stollman, Martin Hubertus Petrus

    2003-01-01

    In this thesis we tested the hypotheses that the auditory system of children continues to mature until at least the age of 12 years and that the development of auditory processing in hearing-impaired and language-impaired children is often delayed or even genuinely disturbed. Data from a longitudinal ...

  12. Accounting for the phenomenology and varieties of auditory verbal hallucination within a predictive processing framework.

    Science.gov (United States)

    Wilkinson, Sam

    2014-11-01

    Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH, can be accounted for.

  13. An auditory illusion reveals the role of streaming in the temporal misallocation of perceptual objects.

    Science.gov (United States)

    Mehta, Anahita H; Jacoby, Nori; Yasin, Ifat; Oxenham, Andrew J; Shamma, Shihab A

    2017-02-19

    This study investigates the neural correlates and processes underlying the ambiguous percept produced by a stimulus similar to Deutsch's 'octave illusion', in which each ear is presented with a sequence of alternating pure tones of low and high frequencies. The same sequence is presented to each ear, but in opposite phase, such that the left and right ears receive a high-low-high … and a low-high-low … pattern, respectively. Listeners generally report hearing the illusion of an alternating pattern of low and high tones, with all the low tones lateralized to one side and all the high tones lateralized to the other side. The current explanation of the illusion is that it reflects an illusory feature conjunction of pitch and perceived location. Using psychophysics and electroencephalogram measures, we test this and an alternative hypothesis involving synchronous and sequential stream segregation, and investigate potential neural correlates of the illusion. We find that the illusion of alternating tones arises from the synchronous tone pairs across ears rather than sequential tones in one ear, suggesting that the illusion involves a misattribution of time across perceptual streams, rather than a misattribution of location within a stream. The results provide new insights into the mechanisms of binaural streaming and synchronous sound segregation. This article is part of the themed issue 'Auditory and visual scene analysis'.

  14. The temporal relationship between the brainstem and primary cortical auditory evoked potentials.

    Science.gov (United States)

    Shaw, N A

    1995-10-01

    Many methods are employed in order to define more precisely the generators of an evoked potential (EP) waveform. One technique is to compare the timing of an EP whose origin is well established with that of one whose origin is less certain. In the present article, the latency of the primary cortical auditory evoked potential (PCAEP) was compared with that of each of the seven subcomponents that compose the brainstem auditory evoked potential (BAEP). The data for this comparison were derived from a retrospective analysis of previous recordings of the PCAEP and BAEP. Central auditory conduction time (CACT) was calculated by subtracting the latency of the cochlear nucleus BAEP component (wave III) from that of the PCAEP. It was found that CACT in humans is 12 msec, which is more than double that of central somatosensory conduction time. The interpeak latencies between BAEP waves V, VI, and VII and the PCAEP were also calculated. It was deduced that all three waves must originate rather more caudally within the central auditory system than is commonly supposed. In addition, it is demonstrated that the early components of the middle latency AEP (No and Na) largely reside within the time domain between the termination of the BAEP components and the PCAEP, which would be consistent with their being far-field reflections of midbrain and subcortical auditory activity. It is concluded that as the afferent volley ascends the central auditory pathways, it generates not a sequence of high-frequency BAEP responses but rather a succession of slower post-synaptic waves. The only means of reconciling the timing of the BAEP waves with that of the PCAEP is to assume that the generation of all the BAEP components must be largely restricted to a quite confined region within the auditory nerve and the lower half of the pons.
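
    The subtraction itself is straightforward; the snippet below uses an assumed typical wave III latency purely for illustration (only the 12-msec conduction time comes from the article):

```python
# Central auditory conduction time (CACT) = PCAEP latency - BAEP wave III latency.
wave_iii_latency_ms = 3.8     # assumed typical cochlear nucleus (wave III) latency
pcaep_latency_ms = 15.8       # assumed primary cortical AEP latency
cact_ms = pcaep_latency_ms - wave_iii_latency_ms
print(f"central auditory conduction time: {cact_ms:.1f} ms")   # ~12 ms, as reported
```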

  15. Tuned with a tune: Talker normalization via general auditory processes

    Directory of Open Access Journals (Sweden)

    Erika J C Laing

    2012-06-01

    Full Text Available Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
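
    A rough sketch of the LTAS comparison implied above (synthetic stand-in signals; the sampling rate, analysis settings, and the 1-3 kHz "cue-relevant" band are assumptions, not the study's stimuli or analysis):

```python
# Compare the long-term average spectrum (LTAS) of two context signals and
# check how much they differ inside an assumed cue-relevant band.
import numpy as np
from scipy.signal import welch

fs = 16000
rng = np.random.default_rng(2)
n = fs * 2                                       # two seconds of "context"

# Stand-ins for two talkers' sentence contexts: noise with different spectral tilt.
context_a = np.cumsum(rng.standard_normal(n)) * 0.01   # low-frequency-heavy
context_b = rng.standard_normal(n)                     # roughly flat spectrum

def ltas_db(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=1024)
    return f, 10 * np.log10(pxx)

f, ltas_a = ltas_db(context_a, fs)
_, ltas_b = ltas_db(context_b, fs)

band = (f >= 1000) & (f <= 3000)                 # assumed cue-relevant region
print("mean LTAS difference in 1-3 kHz band: %.1f dB"
      % float((ltas_a[band] - ltas_b[band]).mean()))
```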

  16. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism:An EEG Power and BOLD fMRI Investigation

    Directory of Open Access Journals (Sweden)

    Elizabeth C Hames

    2016-04-01

    Full Text Available Electroencephalography (EEG) and Blood Oxygen Level Dependent Functional Magnetic Resonance Imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20-28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT participants when they were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2 vs. VV2). We conclude that in ASD, combined audiovisual processing is more similar to that of NT controls than unimodal processing is.

  17. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)

    2001-12-01

    To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval, contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of the fundus of the internal auditory canal.

  18. Sparse Spectro-Temporal Receptive Fields Based on Multi-Unit and High-Gamma Responses in Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Rick L Jenison

    Full Text Available Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLMs). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time, and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduce the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl's gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl's gyrus recordings elicited by click-train stimuli.
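
    A toy version of the group-sparsity idea, assuming synthetic data, a plain squared-error loss in place of the paper's GLM likelihood, and coefficients grouped by frequency channel (all of which are simplifications, not the authors' method):

```python
# Group-sparse STRF sketch via proximal gradient descent on a least-squares loss
# with an L2,1 (group-lasso) penalty. Data and grouping are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_t, n_freq, n_lag = 2000, 16, 10
X = rng.standard_normal((n_t, n_freq * n_lag))        # lagged spectrogram features
true = np.zeros((n_freq, n_lag))
true[6:9] = rng.standard_normal((3, n_lag))           # STRF energy in a few channels only
y = X @ true.ravel() + 0.5 * rng.standard_normal(n_t)

def group_lasso_strf(X, y, groups, lam=0.05, n_iter=500):
    """Proximal-gradient estimate with a group-sparsity penalty."""
    n = X.shape[0]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)       # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = w - step * (X.T @ (X @ w - y) / n)          # gradient step on the squared error
        for g in groups:                                # group soft-thresholding (prox step)
            norm = np.linalg.norm(w[g])
            w[g] = 0.0 if norm == 0 else max(0.0, 1 - step * lam / norm) * w[g]
    return w.reshape(n_freq, n_lag)

groups = [np.arange(f * n_lag, (f + 1) * n_lag) for f in range(n_freq)]
strf = group_lasso_strf(X, y, groups)
print("non-zero frequency channels:", np.flatnonzero(np.abs(strf).sum(axis=1) > 1e-6))
```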

  19. Cognitive function predicts neural activity associated with pre-attentive temporal processing.

    Science.gov (United States)

    Foster, Shannon M; Kisley, Michael A; Davis, Hasker P; Diede, Nathaniel T; Campbell, Alana M; Davalos, Deana B

    2013-01-01

    Temporal processing, or processing time-related information, appears to play a significant role in a variety of vital psychological functions. One of the main confounds to assessing the neural underpinnings and cognitive correlates of temporal processing is that behavioral measures of timing are generally confounded by other supporting cognitive processes, such as attention. Further, much theorizing in this field has relied on findings from clinical populations (e.g., individuals with schizophrenia) known to have temporal processing deficits. In this study, we attempted to avoid these difficulties by comparing temporal processing assessed by a pre-attentive event-related brain potential (ERP) waveform, the mismatch negativity (MMN) elicited by time-based stimulus features, to a number of cognitive functions within a non-clinical sample. We studied healthy older adults (without dementia), as this population inherently ensures more prominent variability in cognitive function than a younger adult sample, allowing for the detection of significant relationships between variables. Using hierarchical regression analyses, we found that verbal memory and executive functions (i.e., planning and conditional inhibition, but not set-shifting) uniquely predicted variance in temporal processing beyond that predicted by the demographic variables of age, gender, and hearing loss. These findings are consistent with a frontotemporal model of MMN waveform generation in response to changes in the temporal features of auditory stimuli.
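
    The hierarchical-regression logic described above (entering demographic variables first, then asking whether cognitive scores explain additional variance) can be sketched with synthetic data; all variable names and effect sizes below are assumptions, not the study's data:

```python
# Hierarchical (blockwise) regression sketch: compare R^2 of nested models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 120
age = rng.uniform(60, 85, n)
hearing_loss = rng.normal(30, 10, n)
verbal_memory = rng.normal(0, 1, n)
executive = rng.normal(0, 1, n)
# Temporal-processing index (e.g., duration-MMN amplitude) with assumed effects.
mmn = 0.02 * age + 0.3 * verbal_memory + 0.3 * executive + rng.normal(0, 1, n)

block1 = sm.add_constant(np.column_stack([age, hearing_loss]))
block2 = sm.add_constant(np.column_stack([age, hearing_loss, verbal_memory, executive]))

fit1 = sm.OLS(mmn, block1).fit()
fit2 = sm.OLS(mmn, block2).fit()
print(f"R^2, demographics only: {fit1.rsquared:.3f}")
print(f"R^2, after adding cognition: {fit2.rsquared:.3f} "
      f"(Delta R^2 = {fit2.rsquared - fit1.rsquared:.3f})")
```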

  20. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    Lyon RF

    2007-01-01

    Full Text Available This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review on the gammatone filter and then exposes two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlea transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems which render them as excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and which can also serve as "design curves" for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a "missing link" between physiological, electrical, and mechanical models for auditory filtering.

  1. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    R. F. Lyon

    2007-12-01

    Full Text Available This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review on the gammatone filter and then exposes two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlea transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems which render them as excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and which can also serve as “design curves” for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a “missing link” between physiological, electrical, and mechanical models for auditory filtering.
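
    The cascade-of-identical-two-pole-sections idea can be sketched digitally as follows (assumed sampling rate, centre frequency, and bandwidth; this is the plain all-pole cascade, without the differentiator or zero that distinguish the DAPGF and OZGF):

```python
# Gammatone-like channel as a cascade of N identical digital two-pole resonators.
import numpy as np
from scipy.signal import lfilter

fs = 16000.0
fc = 1000.0                      # channel centre frequency (Hz), assumed
bw = 125.0                       # bandwidth parameter (Hz), assumed
n_stages = 4                     # classic gammatone order

r = np.exp(-2 * np.pi * bw / fs)             # pole radius from the bandwidth
theta = 2 * np.pi * fc / fs                  # pole angle from the centre frequency
b, a = [1.0 - r], [1.0, -2 * r * np.cos(theta), r * r]

h = np.zeros(512)
h[0] = 1.0                                   # unit impulse
for _ in range(n_stages):                    # cascade of identical two-pole stages
    h = lfilter(b, a, h)

# The envelope of h rises and then decays, approximating the gammatone shape,
# and its spectral peak sits near the channel centre frequency.
spectrum = np.abs(np.fft.rfft(h, 4096))
peak_hz = np.argmax(spectrum) * fs / 4096
print(f"impulse-response peak frequency: {peak_hz:.0f} Hz (target {fc:.0f} Hz)")
```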

  2. Auditory pre-attentive processing of Chinese tones

    Institute of Scientific and Technical Information of China (English)

    YANG Li-jun; CAO Ke-li; WEI Chao-gang; LIU Yong-zhi

    2008-01-01

    Background: Chinese tones are considered important in Chinese discrimination. However, the relevant reports on auditory central mechanisms concerning Chinese tones are limited. In this study, mismatch negativity (MMN), one of the event-related potentials (ERPs), was used to investigate pre-attentive processing of Chinese tones, and the differences between the function of oddball MMN and that of control MMN are discussed. Methods: Ten subjects (six men and four women) with normal hearing participated in the study. A sequence was presented to these subjects through a loudspeaker; the sequence included four blocks, a control block and three oddball blocks. The control block was made up of five components (one pure tone and four Chinese tones) with equal probability. The oddball blocks were made up of two components: one was a standard stimulus (tone 1) and the other was a deviant stimulus (tone 2, tone 3, or tone 4). Electroencephalogram (EEG) data were recorded while the sequence was presented, and MMNs were obtained from the analysis of the EEG data. Results: Two kinds of MMNs were obtained, oddball MMN and control MMN. Oddball MMN was obtained by subtracting the ERP elicited by standard stimulation (tone 1) from that elicited by deviant stimulation (tone 2, tone 3, or tone 4) in the oddball block; control MMN was obtained by subtracting the ERP elicited by the tone in the control block, which was the same tone as the deviant stimulation in the oddball block, from the ERP elicited by deviant stimulation (tone 2, tone 3, or tone 4) in the oddball block. There were two negative waves in oddball MMN, one appearing around 150 ms (oddball MMN 1), the other around 300 ms (oddball MMN 2). Only one negative wave appeared around 300 ms in control MMN, which corresponded to the oddball MMN 2. We performed the statistical analyses in each paradigm for latencies and amplitudes of oddball MMN 2 in discriminating the three Chinese tones and found no significant differences. But the latencies and amplitudes ...

  3. Intermodal auditory, visual, and tactile attention modulates early stages of neural processing.

    Science.gov (United States)

    Karns, Christina M; Knight, Robert T

    2009-04-01

    We used event-related potentials (ERPs) and gamma band oscillatory responses (GBRs) to examine whether intermodal attention operates early in the auditory, visual, and tactile modalities. To control for the effects of spatial attention, we spatially coregistered all stimuli and varied the attended modality across counterbalanced blocks in an intermodal selection task. In each block, participants selectively responded to either auditory, visual, or vibrotactile stimuli from the stream of intermodal events. Auditory and visual ERPs were modulated at the latencies of early cortical processing, but attention manifested later for tactile ERPs. For ERPs, auditory processing was modulated at the latency of the Na (29 msec), which indexes early cortical or thalamocortical processing and the subsequent P1 (90 msec) ERP components. Visual processing was modulated at the latency of the early phase of the C1 (62-72 msec) thought to be generated in the primary visual cortex and the subsequent P1 and N1 (176 msec). Tactile processing was modulated at the latency of the N160 (165 msec) likely generated in the secondary association cortex. Intermodal attention enhanced early sensory GBRs for all three modalities: auditory (onset 57 msec), visual (onset 47 msec), and tactile (onset 27 msec). Together, these results suggest that intermodal attention enhances neural processing relatively early in the sensory stream independent from differential effects of spatial and intramodal selective attention.

  4. Phonological working memory and auditory processing speed in children with specific language impairment

    Directory of Open Access Journals (Sweden)

    Fatemeh Haresabadi

    2015-02-01

    Full Text Available Background and Aim: Specific language impairment (SLI), one variety of developmental language disorder, has attracted much interest in recent decades. Much research has been conducted to discover why some children have a specific language impairment. So far, research has failed to identify a reason for this linguistic deficiency. Some researchers believe language disorder causes defects in phonological working memory and affects auditory processing speed. Therefore, this study reviews the results of research investigating these two factors in children with specific language impairment. Recent Findings: Studies have shown that children with specific language impairment face constraints in phonological working memory capacity. Memory deficit is one possible cause of linguistic disorder in children with specific language impairment. However, in these children, deficits in information processing speed are also observed, especially in the auditory domain. Conclusion: Much more research is required to adequately explain the relationship of phonological working memory and auditory processing speed with language. However, given the role of phonological working memory and auditory processing speed in language acquisition, a focus should be placed on phonological working memory capacity and auditory processing speed in the assessment and treatment of children with a specific language impairment.

  5. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    Science.gov (United States)

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  6. Response to own name in children: ERP study of auditory social information processing.

    Science.gov (United States)

    Key, Alexandra P; Jones, Dorita; Peters, Sarika U

    2016-09-01

    Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M = 7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other's names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ from each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses.

  7. Temporal processing characteristics of the Ponzo illusion.

    Science.gov (United States)

    Schmidt, Filipp; Haberkamp, Anke

    2016-03-01

    Many visual illusions result from assumptions of our visual system that are based on its long-term adaptation to our visual environment. Thus, visual illusions provide the opportunity to identify and learn about these fundamental assumptions. In this paper, we investigate the Ponzo illusion. Although many previous studies researched visual processing of the Ponzo illusion, only very few have considered temporal processing aspects. However, it is well known that our visual percept is modulated by temporal factors. First, we used the Ponzo illusion as a prime in a response priming task to test whether it modulates subsequent responses to the longer (or shorter) of two target bars. Second, we used the same stimuli in a perceptual task to test whether the Ponzo illusion is effective for very short presentation times (12 ms). We observed considerable priming effects that were of similar magnitude as those of a control condition. Moreover, the variations in the priming effects as a function of prime-target stimulus-onset asynchrony were very similar to those of the control condition. However, when analyzing priming effects as a function of participants' response speed, effects for the Ponzo illusion increased in slower responses. We conclude that although the illusion is established rapidly within the visual system, the full integration of context information is based on more time-consuming and later visual processing.

  8. Automatic and Controlled Attention Processes in Auditory Detection.

    Science.gov (United States)

    1981-02-01

    Abstract not available; the record consists of a scanned report documentation page. Legible keywords: attention, dichotic listening, individual differences, time-sharing, memory search, visual search, auditory ...

  9. Cerebral processing of auditory stimuli in patients with irritable bowel syndrome

    Institute of Scientific and Technical Information of China (English)

    Viola Andresen; Peter Kobelt; Claus Zimmer; Bertram Wiedenmann; Burghard F Klapp; Hubert Monnikes; Alexander Poellinger; Chedwa Tsrouya; Dominik Bach; Albrecht Stroh; Annette Foerschler; Petra Georgiewa; Marco Schmidtmann; Ivo R van der Voort

    2006-01-01

    AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, an unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while the response patterns, unlike in controls, did not differentiate between distressing and pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific for visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients.

  10. An assessment technique for children with auditory-language processing problems.

    Science.gov (United States)

    Sanger, D D; Keith, R W; Maher, B A

    1987-08-01

    The purpose of this study was to develop a new multilayer clinical assessment technique to evaluate auditory-language processing abilities in children. Following a 90-min in-service workshop on auditory-language processing problems, 46 nonhandicapped first-, second-, and third-grade students were referred by their classroom teachers for an evaluation of auditory-language processing abilities. Twelve "normally" achieving first-, second-, and third-grade students were randomly selected as controls. Standardized and nonstandardized measures included pure-tone and impedance testing, selected subtests of the Clinical Evaluation of Language Functions (Linguistic Concepts, Relationships and Ambiguities, Oral Directions, Spoken Paragraphs, Word Associations, and Model Sentences), the Goldman-Fristoe-Woodcock (GFW) Memory for Sequence Test, Sound Mimicry Test, Sound-Symbol Association Test, and the GFW Test of Auditory Discrimination. Nonstandardized measures included an Observational Profile of Classroom Communication and an informal language sample. Results indicated that 87% (n = 40) of the 46 referred children were identified as having auditory-language processing problems. In-service training was an effective means to heighten teachers' awareness for referring subjects. Additionally, the Observational Profile of Classroom Communication was an effective procedure for teachers to systematically observe and document communication behaviors in the context of the classroom.

  11. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Tollkötter Melanie

    2006-08-01

    Full Text Available Background: A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypotheses that (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results: Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion: These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.

  12. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth Hendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations have an influence on temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  13. Early auditory processing in musicians and dancers during a contemporary dance piece

    Science.gov (United States)

    Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari

    2016-01-01

    The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when music is accompanied by a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG) as has already been done with functional magnetic resonance imaging (fMRI), these two brain imaging methods complementing each other. PMID:27611929

  14. Early auditory processing in musicians and dancers during a contemporary dance piece.

    Science.gov (United States)

    Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari

    2016-09-09

    The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic and overlapping. Thus, research using natural sounds is crucial in understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared to musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians and laymen when music is accompanied by a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG) as has already been done with functional magnetic resonance imaging (fMRI), these two brain imaging methods complementing each other.

  15. Improving video processing performance using temporal reasoning

    Science.gov (United States)

    Ahmed, Mohamed; Karmouch, Ahmed

    1999-10-01

    In this paper, we present a system, called MediABS, for extracting key frames from a video segment. First, we describe the overall architecture of the system and show how it can handle multiple video formats with a single video-processing module. Then, we present a new algorithm based on color histograms. The algorithm exploits the temporal characteristics of the visual information and provides techniques for avoiding false cuts and eliminating the possibility of missing true cuts. A discussion, along with some results, is provided to show the merits of our algorithm compared to existing related algorithms. Finally, we discuss the performance (in terms of processing time and accuracy) obtained by our system in extracting key frames from a video segment. This work is part of the Mobile Agents Alliance project involving the University of Ottawa, the National Research Council (NRC), and Mitel Corporation.
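
    As an illustration of the kind of histogram-based cut detection the abstract describes, the sketch below flags a candidate cut whenever the color-histogram difference between consecutive frames exceeds a threshold, and uses a minimum gap between cuts as a simple temporal constraint against false cuts. The bin count, threshold, gap value, and the `frames` iterable are illustrative assumptions, not parameters of MediABS.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Per-channel color histogram, normalized so frame size does not matter."""
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
        for c in range(frame.shape[-1])
    ]).astype(float)
    return hist / hist.sum()

def detect_cuts(frames, threshold=0.4, min_gap=10):
    """Flag a cut when the histogram distance between consecutive frames is large.

    `frames` is any iterable of HxWx3 uint8 arrays. Requiring a minimum gap
    between detected cuts is one simple way to exploit temporal structure
    and avoid spurious cuts, in the spirit of the approach described above.
    """
    cuts, prev, last_cut = [], None, -min_gap
    for i, frame in enumerate(frames):
        h = color_histogram(frame)
        if prev is not None:
            distance = 0.5 * np.abs(h - prev).sum()  # normalized to [0, 1]
            if distance > threshold and i - last_cut >= min_gap:
                cuts.append(i)
                last_cut = i
        prev = h
    return cuts
```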

  16. Asymmetric excitatory synaptic dynamics underlie interaural time difference processing in the auditory system.

    Directory of Open Access Journals (Sweden)

    Pablo E Jercog

    Full Text Available Low-frequency sound localization depends on the neural computation of interaural time differences (ITD) and relies on neurons in the auditory brain stem that integrate synaptic inputs delivered by the ipsi- and contralateral auditory pathways that start at the two ears. The first auditory neurons that respond selectively to ITD are found in the medial superior olivary nucleus (MSO). We identified a new mechanism for ITD coding using a brain slice preparation that preserves the binaural inputs to the MSO. There was an internal latency difference for the two excitatory pathways that would, if left uncompensated, position the ITD response function too far outside the physiological range to be useful for estimating ITD. We demonstrate, and support using a biophysically based computational model, that a bilateral asymmetry in excitatory post-synaptic potential (EPSP) slopes provides a robust compensatory delay mechanism due to differential activation of low-threshold potassium conductance on these inputs and permits MSO neurons to encode physiological ITDs. We suggest, more generally, that the dependence of spike probability on rate of depolarization, as in these auditory neurons, provides a mechanism for temporal order discrimination between EPSPs.

  17. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  18. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  19. The Role of the Auditory Brainstem in Processing Linguistically-Relevant Pitch Patterns

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2009-01-01

    Historically, the brainstem has been neglected as a part of the brain involved in language processing. We review recent evidence of language-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem. We argue that there is enhancing…

  20. Auditory Processing Disorder in Children with Reading Disabilities: Effect of Audiovisual Training

    Science.gov (United States)

    Veuillet, Evelyne; Magnan, Annie; Ecalle, Jean; Thai-Van, Hung; Collet, Lionel

    2007-01-01

    Reading disability is associated with phonological problems which might originate in auditory processing disorders. The aim of the present study was 2-fold: first, the perceptual skills of average-reading children and children with dyslexia were compared in a categorical perception task assessing the processing of a phonemic contrast based on…

  1. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    Science.gov (United States)

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Mandarin Chinese vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. It also made significant contributions to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody.
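
    The "unique variance" figures quoted above are the kind of quantity obtained from hierarchical regression. A minimal sketch of such a decomposition is shown below; the variable names and synthetic data are illustrative placeholders, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

def unique_r2(y, predictor, covariates):
    """Unique variance (delta R^2) explained by `predictor` beyond `covariates`.

    `y` and `predictor` are 1-D arrays; `covariates` is a 2-D array of
    control variables. All inputs here are illustrative.
    """
    X_reduced = sm.add_constant(covariates)
    X_full = sm.add_constant(np.column_stack([covariates, predictor]))
    r2_reduced = sm.OLS(y, X_reduced).fit().rsquared
    r2_full = sm.OLS(y, X_full).fit().rsquared
    return r2_full - r2_reduced

# Synthetic example: variance in vocabulary scores explained by tone
# sensitivity beyond age.
rng = np.random.default_rng(0)
age = rng.normal(5.5, 0.5, 100)
tone_sensitivity = rng.normal(0.0, 1.0, 100)
vocabulary = 0.6 * tone_sensitivity + 0.3 * age + rng.normal(0.0, 1.0, 100)
print(unique_r2(vocabulary, tone_sensitivity, age[:, None]))
```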

  2. Cross-modal training induces changes in spatial representations early in the auditory processing pathway.

    Science.gov (United States)

    Bruns, Patrick; Liebnau, Ronja; Röder, Brigitte

    2011-09-01

    In the ventriloquism aftereffect, brief exposure to a consistent spatial disparity between auditory and visual stimuli leads to a subsequent shift in subjective sound localization toward the positions of the visual stimuli. Such rapid adaptive changes probably play an important role in maintaining the coherence of spatial representations across the various sensory systems. In the research reported here, we used event-related potentials (ERPs) to identify the stage in the auditory processing stream that is modulated by audiovisual discrepancy training. Both before and after exposure to synchronous audiovisual stimuli that had a constant spatial disparity of 15°, participants reported the perceived location of brief auditory stimuli that were presented from central and lateral locations. In conjunction with a sound localization shift in the direction of the visual stimuli (the behavioral ventriloquism aftereffect), auditory ERPs as early as 100 ms poststimulus (N100) were systematically modulated by the disparity training. These results suggest that cross-modal learning was mediated by a relatively early stage in the auditory cortical processing stream.

  3. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  4. Deficient auditory processing in children with Asperger Syndrome, as indexed by event-related potentials.

    Science.gov (United States)

    Jansson-Verkasalo, Eira; Ceponiene, Rita; Kielinen, Marko; Suominen, Kalervo; Jäntti, Ville; Linna, Sirkka Liisa; Moilanen, Irma; Näätänen, Risto

    2003-03-06

    Asperger Syndrome (AS) is characterized by normal language development but deficient understanding and use of the intonation and prosody of speech. While individuals with AS report difficulties in auditory perception, there are no studies addressing auditory processing at the sensory level. In this study, event-related potentials (ERP) were recorded for syllables and tones in children with AS and in their control counterparts. Children with AS displayed abnormalities in transient sound-feature encoding, as indexed by the obligatory ERPs, and in sound discrimination, as indexed by the mismatch negativity. These deficits were more severe for the tone stimuli than for the syllables. These results indicate that auditory sensory processing is deficient in children with AS, and that these deficits might be implicated in the perceptual problems encountered by children with AS.

  5. Auditory Signal Processing in Communication: Perception and Performance of Vocal Sounds

    Science.gov (United States)

    Prather, Jonathan F.

    2013-01-01

    Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717

  6. The role of the auditory brainstem in processing musically relevant pitch.

    Science.gov (United States)

    Bidelman, Gavin M

    2013-01-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  7. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  8. Activity-dependent transmission and integration control the timescales of auditory processing at an inhibitory synapse.

    Science.gov (United States)

    Ammer, Julian J; Siveke, Ida; Felmy, Felix

    2015-06-15

    To capture the context of sensory information, neural networks must process input signals across multiple timescales. In the auditory system, a prominent change in temporal processing takes place at an inhibitory GABAergic synapse in the dorsal nucleus of the lateral lemniscus (DNLL). At this synapse, inhibition outlasts the stimulus by tens of milliseconds, such that it suppresses responses to lagging sounds, and is therefore implicated in echo suppression. Here, we untangle the cellular basis of this inhibition. We demonstrate with in vivo whole-cell patch-clamp recordings in Mongolian gerbils that the duration of inhibition increases with sound intensity. Activity-dependent spillover and asynchronous release translate the high presynaptic firing rates found in vivo into a prolonged synaptic output in acute slice recordings. A key mechanism controlling the inhibitory time course is the passive integration of the hyperpolarizing inhibitory conductance. This prolongation depends on the synaptic conductance amplitude. Computational modeling shows that this prolongation is a general mechanism and relies on a non-linear effect caused by synaptic conductance saturation when approaching the GABA reversal potential. The resulting hyperpolarization generates an efficient activity-dependent suppression of action potentials without affecting the threshold or gain of the input-output function. Taken together, the GABAergic inhibition in the DNLL is adjusted to the physiologically relevant duration by passive integration of inhibition with activity-dependent synaptic kinetics. This change in processing timescale combined with the reciprocal connectivity between the DNLLs implements a mechanism to suppress the distracting localization cues of echoes and helps to localize the initial sound source reliably.
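
    The passive-integration mechanism described above can be illustrated with a toy single-compartment model: as the peak synaptic conductance grows, the membrane potential saturates near the GABA reversal potential and the hyperpolarization outlasts the conductance itself. All parameter values below are illustrative choices, not measurements from DNLL neurons.

```python
import numpy as np

def ipsp_halfwidth(g_peak_nS, tau_syn_ms=2.0, e_gaba_mV=-90.0,
                   e_rest_mV=-60.0, g_leak_nS=10.0, c_m_pF=150.0):
    """Half-width (ms) of the hyperpolarization produced by an exponentially
    decaying GABAergic conductance in a passive single-compartment neuron."""
    dt = 0.01                                 # time step, ms
    t = np.arange(0.0, 80.0, dt)
    g_syn = g_peak_nS * np.exp(-t / tau_syn_ms)
    v = np.empty_like(t)
    v[0] = e_rest_mV
    for i in range(1, len(t)):
        i_leak = g_leak_nS * (e_rest_mV - v[i - 1])       # pA
        i_syn = g_syn[i - 1] * (e_gaba_mV - v[i - 1])     # pA
        v[i] = v[i - 1] + dt * (i_leak + i_syn) / c_m_pF  # pA/pF = mV/ms
    dip = v.min() - e_rest_mV                             # peak hyperpolarization
    return float(np.sum((v - e_rest_mV) < dip / 2.0) * dt)

# Larger conductances saturate near E_GABA and yield disproportionately
# longer inhibition, the qualitative effect described in the abstract.
for g in (20.0, 100.0, 500.0):
    print(g, round(ipsp_halfwidth(g), 1))
```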

  9. [A Role of the Basal Ganglia in Processing of Complex Sounds and Auditory Attention].

    Science.gov (United States)

    Silkis, I G

    2015-01-01

    A hypothetical mechanism is suggested for the processing of complex sounds and auditory attention in parallel neuronal loops that include various auditory cortical areas connected with parts of the medial geniculate body, the inferior colliculus, and the basal ganglia. Release of dopamine in the striatum promotes bidirectional modulation of strong and weak inputs from the neocortex to striatal neurons, giving rise to the direct and indirect pathways through the basal ganglia. Subsequent synergistic disinhibition of one group of thalamic neurons and inhibition of other groups by the basal ganglia result in the creation of contrasted neuronal representations of the properties of auditory stimuli in related cortical areas. Contrasting is strengthened by the simultaneous disinhibition of the pedunculopontine nucleus and action at muscarinic receptors on neurons in the medial geniculate body. It follows from this mechanism that involuntary attention to a sound tone can enhance an early component of the responses of neurons in the primary auditory cortical area (50 msec) in the absence of dopamine, due to disinhibition of thalamic neurons via the direct pathway through the basal ganglia, whereas voluntary attention to complex sounds can enhance only those components of the responses of neurons in secondary auditory cortical areas whose latencies exceed the latencies of dopaminergic cells (i.e., after 100 msec). Various consequences of the proposed mechanism are in agreement with known experimental data.

  10. Entropical Aspects in Auditory Processes and Psychoacoustical Law of Weber-Fechner

    Science.gov (United States)

    Cosma, I.; Popescu, D. I.

    For the sense of hearing, mechanoreceptors fire action potentials when their membranes are physically stretched. Based on statistical physics, we analyzed the entropic aspects of the auditory processes involved in hearing. We develop a model that connects the logarithm of the relative intensity of sound (loudness) to the level of energy disorder within the cellular sensory system. The increase of entropy and disorder in the system is connected to the free energy available to signal the production of action potentials in the inner hair cells of the vestibulocochlear auditory organ.
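
    For reference, the psychoacoustical Weber-Fechner law named in the title states that perceived magnitude grows with the logarithm of stimulus intensity relative to threshold; the proportionality constant k is not specified in the abstract.

```latex
% Weber-Fechner law: perceived loudness S grows logarithmically with the
% sound intensity I relative to the threshold intensity I_0.
S = k \,\ln\!\left(\frac{I}{I_0}\right)
```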

  11. Sensory Processing: Advances in Understanding Structure and Function of Pitch-Shifted Auditory Feedback in Voice Control

    Directory of Open Access Journals (Sweden)

    Charles R Larson

    2016-02-01

    Full Text Available The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts in voice auditory feedback to vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor control. Another use of this technique requires subjects to voluntarily change the pitch of their voice when they hear a pitch shift stimulus. Under these conditions, short latency responses are produced that change voice pitch to match that of the stimulus. The pitch-shift technique has been used with magnetoencephalography (MEG) and electroencephalography (EEG) recordings, and has shown that at vocal onset there is normally a suppression of neural activity related to vocalization. However, if a pitch-shift is also presented at voice onset, there is a cancellation of this suppression, which has been interpreted to mean that one way in which a person distinguishes self-vocalization from vocalization of others is by a comparison of the intended voice and the actual voice. Studies of the pitch shift reflex in the fMRI environment show that the superior temporal gyrus (STG) plays an important role in the process of controlling voice F0 based on auditory feedback. Additional studies using fMRI for effective connectivity modeling show that the left and right STG play critical roles in correcting for an error in voice production. While both the left and right STG are involved in this process, a feedback loop develops between left and right STG during perturbations, in which the left to right connection becomes stronger, and a new negative right to left connection emerges along with the emergence of other feedback loops within the cortical network tested.
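
    In the pitch-shift paradigm the perturbation is applied to auditory feedback in real time with dedicated hardware; the offline sketch below only illustrates the stimulus manipulation itself, a brief shift of voice pitch by a given number of cents. The onset, duration, and shift size are illustrative values, and the `librosa` package is assumed to be available.

```python
import numpy as np
import librosa

def apply_brief_pitch_shift(y, sr, onset_s=1.0, duration_s=0.2, cents=100.0):
    """Return a copy of the voice signal `y` with one brief pitch-shifted segment.

    A positive `cents` value shifts the segment upward (100 cents = 1 semitone).
    """
    start = int(onset_s * sr)
    stop = min(start + int(duration_s * sr), len(y))
    shifted = librosa.effects.pitch_shift(y[start:stop], sr=sr,
                                          n_steps=cents / 100.0)
    out = y.copy()
    out[start:stop] = shifted[: stop - start]
    return out

# Example: a 3 s synthetic 220 Hz tone with a 200 ms, +100 cent perturbation.
sr = 22050
t = np.arange(0.0, 3.0, 1.0 / sr)
y = 0.1 * np.sin(2 * np.pi * 220.0 * t)
perturbed = apply_brief_pitch_shift(y, sr)
```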

  12. CONTRALATERAL SUPPRESSION OF DISTORTION PRODUCT OTOACOUSTIC EMISSION IN CHILDREN WITH AUDITORY PROCESSING DISORDERS

    Institute of Scientific and Technical Information of China (English)

    Jessica Oppee; SUN Wei; Nancy Stecker

    2014-01-01

    Previous research has demonstrated that the amplitude of evoked emissions decreases in human subjects when the contralateral ear is stimulated by noise. The medial olivocochlear bundle (MOCB) is believed to control this phenomenon. Recent research has examined this effect in individuals with auditory processing disorders (APD), specifically those with difficulty understanding speech in noise. Results showed that transient evoked otoacoustic emissions (TEOAEs) were not affected by contralateral stimulation in these subjects. Much clinical research has measured the function of the MOCB through TEOAEs. This study used an alternative technique, distortion product otoacoustic emissions (DPOAEs), to examine this phenomenon and evaluate the function of the MOCB. DPOAEs of individuals in a control group with normal hearing and no significant auditory processing difficulties were compared to the DPOAEs of children with significant auditory processing difficulties. Results showed that the suppression effect was observed in the control group at 2 kHz with 3 kHz of narrowband noise. For the auditory processing disorders group, no significant suppression was observed. Overall, DPOAEs showed suppression with contralateral noise, while levels in the APD group increased. These results provide further evidence that the MOCB may have reduced function in children with APD.

  13. The Comparative and Developmental Study of Auditory Information Processing in Autistic Adults.

    Science.gov (United States)

    Nakamura, Kenryu; And Others

    1986-01-01

    The study examined brain functions related to information processing in autistic adults using auditory evoked potentials (AEP) and missing stimulus potentials (MSP). Both nonautistic and autistic adults showed normal mature patterns and lateralities in AEP for music stimuli, but nonautistic children did not. Autistic adults showed matured patterns…

  14. Exploration of Teachers' Awareness and Knowledge of (Central) Auditory Processing Disorder ((C)APD)

    Science.gov (United States)

    Ryan, Anita; Logue-Kennedy, Maria

    2013-01-01

    The aim of this study was to explore primary school teachers' awareness and knowledge of (Central) Auditory Processing Disorder ((C)APD). Teachers' awareness and knowledge are crucial for initial recognition and appropriate referral of children suspected of having (C)APD. When a child is diagnosed with (C)APD, teachers have a role in implementing…

  15. Peeling the Onion of Auditory Processing Disorder: A Language/Curricular-Based Perspective

    Science.gov (United States)

    Wallach, Geraldine P.

    2011-01-01

    Purpose: This article addresses auditory processing disorder (APD) from a language-based perspective. The author asks speech-language pathologists to evaluate the functionality (or not) of APD as a diagnostic category for children and adolescents with language-learning and academic difficulties. Suggestions are offered from a…

  16. A Binaural Neuromorphic Auditory Sensor for FPGA: A Spike Signal Processing Approach.

    Science.gov (United States)

    Jimenez-Fernandez, Angel; Cerezuela-Escudero, Elena; Miro-Amarante, Lourdes; Dominguez-Moralse, Manuel Jesus; de Asis Gomez-Rodriguez, Francisco; Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel

    2017-04-01

    This paper presents a new architecture, design flow, and field-programmable gate array (FPGA) implementation analysis of a neuromorphic binaural auditory sensor, designed completely in the spike domain. Unlike digital cochleae that decompose audio signals using classical digital signal processing techniques, the model presented in this paper processes information directly encoded as spikes using pulse frequency modulation and provides a set of frequency-decomposed audio information using an address-event representation interface. In this case, a systematic approach to design led to a generic process for building, tuning, and implementing audio frequency decomposers with different features, facilitating synthesis with custom features. This allows researchers to implement their own parameterized neuromorphic auditory systems in a low-cost FPGA in order to study the audio processing and learning activity that takes place in the brain. In this paper, we present a 64-channel binaural neuromorphic auditory system implemented in a Virtex-5 FPGA using a commercial development board. The system was excited with a diverse set of audio signals in order to analyze its response and characterize its features. The neuromorphic auditory system response times and frequencies are reported. The experimental results of the proposed system implementation with 64-channel stereo are: a frequency range between 9.6 Hz and 14.6 kHz (adjustable), a maximum output event rate of 2.19 Mevents/s, a power consumption of 29.7 mW, a slice requirement of 11,141, and a system clock frequency of 27 MHz.
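
    A software analogue of the spike encoding described above (band-pass decomposition followed by pulse frequency modulation, with output as address-events) might look like the sketch below. The filter order, band edges, and maximum pulse rate are illustrative choices, not the published parameters of the FPGA design.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def pfm_spike_encode(x, fs, bands, max_rate=2000.0):
    """Encode an audio signal as (time_s, channel) address-events.

    Each band-passed channel drives a pulse-frequency-modulated generator:
    a phase accumulator integrates the rectified channel output and emits
    a spike each time it crosses an integer value.
    """
    events = []
    for ch, (lo, hi) in enumerate(bands):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        drive = np.abs(sosfilt(sos, x))                     # rectified channel output
        phase = np.cumsum(drive / (drive.max() + 1e-12)) * max_rate / fs
        spike_idx = np.where(np.diff(np.floor(phase)) > 0)[0]
        events.extend((i / fs, ch) for i in spike_idx)
    return sorted(events)

# Example: two octave-wide channels applied to a short 1 kHz test tone.
fs = 44100
t = np.arange(0.0, 0.1, 1.0 / fs)
tone = np.sin(2 * np.pi * 1000.0 * t)
events = pfm_spike_encode(tone, fs, bands=[(500.0, 1000.0), (1000.0, 2000.0)])
```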

  17. Age, dyslexia subtype and comorbidity modulate rapid auditory processing in developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Maria Luisa Lorusso

    2014-05-01

    Full Text Available The nature of Rapid Auditory Processing (RAP) deficits in dyslexia remains debated, together with the specificity of the problem to certain types of stimuli and/or restricted subgroups of individuals. Following the hypothesis that the heterogeneity of the dyslexic population may have led to contrasting results, the aim of the study was to define the effect of age, dyslexia subtype and comorbidity on the discrimination and reproduction of nonverbal tone sequences. Participants were 46 children aged 8 - 14 (26 with dyslexia), subdivided according to age, presence of a previous language delay, and type of dyslexia. Experimental tasks were a Temporal Order Judgment (TOJ) task (manipulating tone length, ISI and sequence length) and a Pattern Discrimination Task. Dyslexic children showed general RAP deficits. Tone length and ISI influenced dyslexic and control children’s performance in a similar way, but dyslexic children were more affected by an increase from 2 to 5 sounds. As to age, older dyslexic children’s difficulty in reproducing sequences of 4 and 5 tones was similar to that of normally reading younger (but not older) children. In the analysis of subgroup profiles, the crucial variable appears to be the advantage, or lack thereof, in processing long vs short sounds. Dyslexic children with a previous language delay obtained the lowest scores in RAP measures, but they performed worse with shorter stimuli, similar to control children, while dyslexic-only children showed no advantage for longer stimuli. As to dyslexia subtype, only surface dyslexics improved their performance with longer stimuli, while phonological dyslexics did not. Differential scores for short vs long tones and for long vs short ISIs predict nonword and word reading, respectively, and the former correlate with phonemic awareness. In conclusion, the relationship between nonverbal RAP, phonemic skills and reading abilities appears to be characterized by complex interactions with

  18. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development 1-4 as well as mechanisms for temporal coding in the central nervous system 5-7.

  19. Auditory Backward Masking Deficits in Children with Reading Disabilities

    Science.gov (United States)

    Montgomery, Christine R.; Morris, Robin D.; Sevcik, Rose A.; Clarkson, Marsha G.

    2005-01-01

    Studies evaluating temporal auditory processing among individuals with reading and other language deficits have yielded inconsistent findings due to methodological problems (Studdert-Kennedy & Mody, 1995) and sample differences. In the current study, seven auditory masking thresholds were measured in fifty-two 7- to 10-year-old children (26…

  20. Aphasia and Auditory Processing after Stroke through an International Classification of Functioning, Disability and Health Lens.

    Science.gov (United States)

    Purdy, Suzanne C; Wanigasekara, Iruni; Cañete, Oscar M; Moore, Celia; McCann, Clare M

    2016-08-01

    Aphasia is an acquired language impairment affecting speaking, listening, reading, and writing. Aphasia occurs in about a third of patients who have ischemic stroke and significantly affects functional recovery and return to work. Stroke is more common in older individuals but also occurs in young adults and children. Because people experiencing a stroke are typically aged between 65 and 84 years, hearing loss is common and can potentially interfere with rehabilitation. There is some evidence for increased risk and greater severity of sensorineural hearing loss in the stroke population and hence it has been recommended that all people surviving a stroke should have a hearing test. Auditory processing difficulties have also been reported poststroke. The International Classification of Functioning, Disability and Health (ICF) can be used as a basis for describing the effect of aphasia, hearing loss, and auditory processing difficulties on activities and participation. Effects include reduced participation in activities outside the home such as work and recreation and difficulty engaging in social interaction and communicating needs. A case example of a young man (M) in his 30s who experienced a left-hemisphere ischemic stroke is presented. M has normal hearing sensitivity but has aphasia and auditory processing difficulties based on behavioral and cortical evoked potential measures. His principal goal is to return to work. Although auditory processing difficulties (and hearing loss) are acknowledged in the literature, clinical protocols typically do not specify routine assessment. The literature and the case example presented here suggest a need for further research in this area and a possible change in practice toward more routine assessment of auditory function post-stroke.

  1. Peripheral auditory processing changes seasonally in Gambel's white-crowned sparrow.

    Science.gov (United States)

    Caras, Melissa L; Brenowitz, Eliot; Rubel, Edwin W

    2010-08-01

    Song in oscine birds is a learned behavior that plays important roles in breeding. Pronounced seasonal differences in song behavior and in the morphology and physiology of the neural circuit underlying song production are well documented in many songbird species. Androgenic and estrogenic hormones largely mediate these seasonal changes. Although much work has focused on the hormonal mechanisms underlying seasonal plasticity in songbird vocal production, relatively less work has investigated seasonal and hormonal effects on songbird auditory processing, particularly at a peripheral level. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a highly seasonal breeder. Photoperiod and hormone levels were manipulated in the laboratory to simulate natural breeding and non-breeding conditions. Peripheral auditory function was assessed by measuring the auditory brainstem response (ABR) and distortion product otoacoustic emissions (DPOAEs) of males and females in both conditions. Birds exposed to breeding-like conditions demonstrated elevated thresholds and prolonged peak latencies when compared with birds housed under non-breeding-like conditions. There were no changes in DPOAEs, however, which indicates that the seasonal differences in ABRs do not arise from changes in hair cell function. These results suggest that seasons and hormones impact auditory processing as well as vocal production in wild songbirds.

  2. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy.

    Science.gov (United States)

    Rocha, Caroline Nunes; Miziara, Carmen Silvia Molleis Galego; Manreza, Maria Luiza Giraldes de; Schochat, Eliane

    2010-02-01

    The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) for subjects with left mesial temporal sclerosis (LMTS) in relation to a behavioral test, the Dichotic Digits Test (DDT), and an event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and P300 amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls. This difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. Regarding P300 latency and amplitude, as a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions in relation to controls, with the difference being statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of the P300 between the affected and unaffected hemispheres.

  3. The effects of visual training on multisensory temporal processing

    Science.gov (United States)

    Stevenson, Ryan A.; Wilson, Magdalena M.; Powers, Albert R.; Wallace, Mark T.

    2013-01-01

    The importance of multisensory integration for human behavior and perception is well documented, as is the impact that temporal synchrony has on driving such integration. Thus, the more temporally coincident two sensory inputs from different modalities are, the more likely they will be perceptually bound. This temporal integration process is captured by the construct of the temporal binding window - the range of temporal offsets within which an individual is able to perceptually bind inputs across sensory modalities. Recent work has shown that this window is malleable, and can be narrowed via a multisensory perceptual feedback training process. In the current study, we seek to extend this by examining the malleability of the multisensory temporal binding window through changes in unisensory experience. Specifically, we measured the ability of visual perceptual feedback training to induce changes in the multisensory temporal binding window. Visual perceptual training with feedback successfully improved temporal visual processing and more importantly, this visual training increased the temporal precision across modalities, which manifested as a narrowing of the multisensory temporal binding window. These results are the first to establish the ability of unisensory temporal training to modulate multisensory temporal processes, findings that can provide mechanistic insights into multisensory integration and which may have a host of practical applications. PMID:23307155
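
    The temporal binding window discussed above is commonly estimated by fitting a function to the proportion of "synchronous" responses across stimulus onset asynchronies and reading off its width at some criterion. The Gaussian form and the 0.75 criterion in the sketch below are common but assumed choices, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-((soa - mu) ** 2) / (2.0 * sigma ** 2))

def binding_window_width(soas_ms, p_synchronous, criterion=0.75):
    """Width (ms) of the fitted simultaneity-judgment curve at `criterion` of
    its peak, one common operationalization of the temporal binding window."""
    (amp, mu, sigma), _ = curve_fit(gaussian, soas_ms, p_synchronous,
                                    p0=[1.0, 0.0, 100.0])
    half_width = abs(sigma) * np.sqrt(2.0 * np.log(1.0 / criterion))
    return 2.0 * half_width

# Synthetic example: judgments peaking at a small visual-lead asynchrony.
soas = np.linspace(-400, 400, 17)
p_sync = gaussian(soas, 0.95, 25.0, 120.0) \
    + np.random.default_rng(1).normal(0.0, 0.02, soas.size)
print(binding_window_width(soas, p_sync))
```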

  4. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp Homan

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  5. A hierarchy of event-related potential markers of auditory processing in disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Steve Beukema

    2016-01-01

    Full Text Available Functional neuroimaging of covert perceptual and cognitive processes can inform the diagnoses and prognoses of patients with disorders of consciousness, such as the vegetative and minimally conscious states (VS; MCS). Here we report an event-related potential (ERP) paradigm for detecting a hierarchy of auditory processes in a group of healthy individuals and patients with disorders of consciousness. Simple cortical responses to sounds were observed in all 16 patients; 7/16 (44%) patients exhibited markers of the differential processing of speech and noise; and 1 patient produced evidence of the semantic processing of speech (i.e. the N400 effect). In several patients, the level of auditory processing that was evident from ERPs was higher than the abilities that were evident from behavioural assessment, indicating a greater sensitivity of ERPs in some cases. However, there were no differences in auditory processing between VS and MCS patient groups, indicating a lack of diagnostic specificity for this paradigm. Reliably detecting semantic processing by means of the N400 effect in passively listening single subjects is a challenge. Multiple assessment methods are needed in order to fully characterise the abilities of patients with disorders of consciousness.

  6. Musical intervention enhances infants' neural processing of temporal structure in music and speech.

    Science.gov (United States)

    Zhao, T Christina; Kuhl, Patricia K

    2016-05-10

    Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants' neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants' neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants' neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants' ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing.
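
    The mismatch response (MMR) reported above is, at its core, a deviant-minus-standard difference wave computed from averaged epochs. A minimal sketch, assuming the epochs have already been extracted into a NumPy array, is shown below; it is not the MEG pipeline actually used in the study.

```python
import numpy as np

def mismatch_response(epochs, labels):
    """Deviant-minus-standard difference wave.

    `epochs` has shape (n_trials, n_channels, n_times) and `labels` marks each
    trial as "standard" or "deviant"; both inputs are illustrative placeholders.
    """
    labels = np.asarray(labels)
    standard = epochs[labels == "standard"].mean(axis=0)
    deviant = epochs[labels == "deviant"].mean(axis=0)
    return deviant - standard  # shape (n_channels, n_times)
```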

  7. Visual or Auditory Processing Style and Strategy Effectiveness.

    Science.gov (United States)

    Weed, Keri; Ryan, Ellen Bouchard

    In a study that investigated differences in the processing styles of beginning readers, a Pictograph Sentence Memory Test (PSMT) was administered to first and second grade students to determine their processing style as well as to assess instructional effects. Based on their responses to the PSMT, the children were classified as either visual or…

  8. [Auditory-evoked responses to a monaural or a binaural click, recorded from the vertex as well as in two temporal derivations; effect of interaural time differences (author's transl)].

    Science.gov (United States)

    Botte, M C; Chocholle, R

    1976-01-01

    The auditory-evoked responses were recorded in 5 subjects from vertex, right temporal, and left temporal electrodes simultaneously. Clicks at 30 dB sensation level were used as stimuli; one click was presented only to the right ear, or one click only to the left ear, or one click to the right ear and another click to the left ear, with a variable interaural time difference in this latter case (0-150 ms). The N-P amplitude variations and the N and P latency variations were studied and compared to those observed in the perceived lateralizations of the sound source.

  9. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    OpenAIRE

    Hendrik Santosa; Melissa Jiyoun Hong; Keum-Shik Hong

    2014-01-01

    The present study aims to determine the effects of background noise on the hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distingui...

  10. Lateralization of music processing with noises in the auditory cortex: an fNIRS study

    OpenAIRE

    Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik

    2014-01-01

    The present study aims to determine the effects of background noise on the hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish s...

  11. Differential Processing of Consonance and Dissonance within the Human Superior Temporal Gyrus

    Directory of Open Access Journals (Sweden)

    Francine Foo

    2016-04-01

    Full Text Available The auditory cortex is well known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals directly from the lateral surface of either the left or right temporal lobe of 8 patients undergoing neurosurgical treatment as they passively listened to highly consonant and highly dissonant musical chords. We assessed ECoG activity in the high gamma (γhigh, 70-150 Hz) frequency range within the superior temporal gyrus (STG) and observed two types of cortical sites of interest in both hemispheres: one type showed no significant difference in γhigh activity between consonant and dissonant chords, and another type showed increased γhigh responses to dissonant chords between 75-200 ms post-stimulus onset. Furthermore, a subset of these sites exhibited additional sensitivity towards different types of dissonant chords. We also observed a distinct spatial organization of cortical sites in the right STG, with dissonant-sensitive sites located anterior to non-sensitive sites. In sum, these findings demonstrate differential processing of consonance and dissonance in bilateral STG with the right hemisphere exhibiting robust and spatially organized sensitivity towards dissonance.
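
    High-gamma activity of the kind analyzed above is typically quantified by band-pass filtering each ECoG channel and taking the power of the analytic signal. The sketch below follows the 70-150 Hz range named in the abstract, while the filter order and the use of a Hilbert envelope are illustrative choices rather than the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def high_gamma_power(ecog, fs, band=(70.0, 150.0), order=4):
    """High-gamma analytic power of ECoG data shaped (n_channels, n_samples)."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, ecog, axis=-1)           # zero-phase band-pass
    return np.abs(hilbert(filtered, axis=-1)) ** 2       # squared envelope
```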

  12. Temporal Expectation and Information Processing: A Model-Based Analysis

    Science.gov (United States)

    Jepma, Marieke; Wagenmakers, Eric-Jan; Nieuwenhuis, Sander

    2012-01-01

    People are able to use temporal cues to anticipate the timing of an event, enabling them to process that event more efficiently. We conducted two experiments, using the fixed-foreperiod paradigm (Experiment 1) and the temporal-cueing paradigm (Experiment 2), to assess which components of information processing are speeded when subjects use such…

  13. Cognitive components of regularity processing in the auditory domain.

    Directory of Open Access Journals (Sweden)

    Stefan Koelsch

    Full Text Available BACKGROUND: Music-syntactic irregularities often co-occur with the processing of physical irregularities. In this study we constructed chord sequences such that perceived differences in the cognitive processing between regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition or pitch commonality (the major component of 'sensory dissonance'). METHODOLOGY/PRINCIPAL FINDINGS: Two groups of subjects (musicians and nonmusicians) were investigated with electroencephalography (EEG). Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs). The ERAN had a latency of around 180 ms after the onset of the music-syntactically irregular chords, and had maximum amplitude values over right anterior electrode sites. CONCLUSIONS/SIGNIFICANCE: Because irregular chords were hardly detectable based on acoustical factors (such as pitch repetition and sensory dissonance), this ERAN effect reflects for the most part cognitive (not sensory) components of regularity-based, music-syntactic processing. Our study represents a methodological advance compared to previous ERP studies investigating the neural processing of music-syntactically irregular chords.

  14. Auditory Processing Speed and Signal Detection in Schizophrenia

    Science.gov (United States)

    Korboot, P. J.; Damiani, N.

    1976-01-01

    Two differing explanations of schizophrenic processing deficit were examined: Chapman and McGhie's and Yates'. Thirty-two schizophrenics, classified on the acute-chronic and paranoid-nonparanoid dimensions, and eight neurotics were tested on two dichotic listening tasks. (Editor)

  15. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes.

    Science.gov (United States)

    Lewis, James W; Talkington, William J; Tallaksen, Katherine C; Frum, Chris A

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and "auditory objects" can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more "object-like," independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds (a quantitative measure of change in entropy of the acoustic signals over time), and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the

  16. Towards Low-Power On-chip Auditory Processing

    Directory of Open Access Journals (Sweden)

    Paul Hasler

    2005-05-01

    Full Text Available Machine perception is a difficult problem, both from a practical or implementation point of view and from a theoretical or algorithmic point of view. Machine perception systems based on biological perception systems show great promise in many areas, but they often have processing and/or data-flow requirements that are difficult to implement, especially in small or low-power systems. We propose a system design approach that makes it possible to implement complex functionality using cooperative analog-digital signal processing, lowering power requirements dramatically compared with digital-only systems, as well as providing an architecture that facilitates the development of biologically motivated perception systems. We describe the architecture and the application development approach. We also present several reference systems for speech recognition, noise suppression, and audio classification.

  17. Auditory Processing in Noise: A Preschool Biomarker for Literacy.

    Directory of Open Access Journals (Sweden)

    Travis White-Schwoch

    2015-07-01

    Full Text Available Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.

  18. Sequential grouping constraints on across-channel auditory processing

    DEFF Research Database (Denmark)

    Oxenham, Andrew J.; Dau, Torsten

    2005-01-01

    […], 1958–1965 (1985)]. Søren explained this surprising result in terms of the spread of masker excitation and across-channel processing of envelope fluctuations. A later study [S. Buus and C. Pan, J. Acoust. Soc. Am. 96, 1445–1457 (1994)] pioneered the use of the same stimuli in tasks where across-channel […] the perceptual segregation of off-frequency from on-frequency components, using sound sequences preceding or following the target, leads to results similar to those found in the absence of the off-frequency components. This suggests a high-level locus for some across-channel effects, and may help provide […]

  19. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an epileptic syndrome characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were afterwards screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in successive generations, which is especially important for genetic counselling.

  20. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    Science.gov (United States)

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  1. Avaliação do processamento auditivo em crianças com dificuldades de aprendizagem Auditory processing evaluation in children with learning difficulties

    Directory of Open Access Journals (Sweden)

    Lucilene Engelmann

    2009-01-01

    Full Text Available PURPOSE: To clarify the relationship between learning difficulties and auditory processing disorder in second grade students. METHODS: Based on the application of reading tests, the students of a second grade class of an elementary school were classified into two groups according to their reading fluency: a group with better fluency (group A) and another with less fluency (group B). A between-group analysis of the auditory processing tests was carried out. RESULTS: All participants presented learning difficulties and auditory processing disorder in almost all primary subprofiles. It was observed that the verbal sequential memory ability of the less fluent group (group B) was significantly better (p=0.030). CONCLUSION: The diagnosis of primary auditory processing disorder is questioned, and the importance of stimulating verbal sequential memory for the learning of reading and writing abilities is emphasized.

  2. Automatic auditory intelligence: an expression of the sensory-cognitive core of cognitive processes.

    Science.gov (United States)

    Näätänen, Risto; Astikainen, Piia; Ruusuvirta, Timo; Huotilainen, Minna

    2010-09-01

    In this article, we present a new view on the nature of cognitive processes suggesting that there is a common core, viz., automatic sensory-cognitive processes that form the basis for higher-order cognitive processes. It has been shown that automatic sensory-cognitive processes are shared by humans and various other species and occur at different developmental stages and even in different states of consciousness. This evidence, based on the automatic electrophysiological change-detection response mismatch negativity (MMN), its magnetoencephalographic equivalent MMNm, and behavioral data, indicates that in audition surprisingly complex processes occur automatically and mainly in the sensory-specific cortical regions. These processes include, e.g. stimulus anticipation and extrapolation, sequential stimulus-rule extraction, and pattern and pitch-interval encoding. Furthermore, these complex perceptual-cognitive processes, first found in waking adults, occur similarly even in sleeping newborns, anesthetized animals, and deeply sedated adult humans, suggesting that they form the common perceptual-cognitive core of cognitive processes in general. Although the present evidence originates mainly from the auditory modality, it is likely that analogous evidence could be obtained from other sensory modalities when measures corresponding to those used in the study of the auditory modality become available.

  3. Neural correlates of accelerated auditory processing in children engaged in music training

    Directory of Open Access Journals (Sweden)

    Assal Habibi

    2016-10-01

    Full Text Available Several studies comparing adult musicians and non-musicians have shown that music training is associated with brain differences. It is unknown, however, whether these differences result from lengthy musical training, from pre-existing biological traits, or from social factors favoring musicality. As part of an ongoing 5-year longitudinal study, we investigated the effects of a music training program on the auditory development of children, over the course of two years, beginning at age 6–7. The training was group-based and inspired by El-Sistema. We compared the children in the music group with two comparison groups of children of the same socio-economic background, one involved in sports training, another not involved in any systematic training. Prior to participating, children who began training in music did not differ from those in the comparison groups in any of the assessed measures. After two years, we now observe that children in the music group, but not in the two comparison groups, show an enhanced ability to detect changes in tonal environment and an accelerated maturity of auditory processing as measured by cortical auditory evoked potentials to musical notes. Our results suggest that music training may result in stimulus specific brain changes in school aged children.

  4. Simple ears-flexible behavior: Information processing in the moth auditory pathway

    Institute of Scientific and Technical Information of China (English)

    Gerit PFUHL; Blanka KALINOVA; Irena VALTEROVA; Bente G.BERG

    2015-01-01

    Lepidoptera evolved tympanic ears in response to echolocating bats. Comparative studies have shown that moth ears evolved many times independently from chordotonal organs. With only 1 to 4 receptor cells, they are one of the simplest hearing organs. The small number of receptors does not imply simplicity, neither in behavior nor in the neural circuit. Behaviorally, the response to ultrasound is far from being a simple reflex. Moths' escape behavior is modulated by a variety of cues, especially pheromones, which can alter the auditory response. Neurally, the receptor cell(s) diverge onto many interneurons, enabling parallel processing and feature extraction. Ascending interneurons and sound-sensitive brain neurons innervate a neuropil in the ventrolateral protocerebrum. Further, recent electrophysiological data provide the first glimpses into how the acoustic response is modulated as well as how ultrasound influences the other senses. So far, the auditory pathway has been studied in noctuids. The findings agree well with common computational principles found in other insects. However, moth ears also show unique mechanical and neural adaptations. Here, we first describe the variety of moths' auditory behavior, especially the co-option of ultrasonic signals for intraspecific communication. Second, we describe the current knowledge of the neural pathway gained from noctuid moths. Finally, we argue that Galleriinae, which show negative and positive phonotaxis, are an interesting model species for future electrophysiological studies of the auditory pathway and multimodal sensory integration, and so are ideally suited for the study of the evolution of behavioral mechanisms given a few receptors [Current Zoology 61 (2): 292-302, 2015].

  5. Sensorimotor nucleus NIf is necessary for auditory processing but not vocal motor output in the avian song system.

    Science.gov (United States)

    Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F

    2005-04-01

    Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.

  6. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    Full Text Available In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  7. Electrophysiological assessment of auditory processing disorder in children with non-syndromic cleft lip and/or palate

    Directory of Open Access Journals (Sweden)

    Xiaoran Ma

    2016-08-01

    Full Text Available Objectives Cleft lip and/or palate is a common congenital craniofacial malformation found worldwide. A frequently associated disorder is conductive hearing loss, which has been thoroughly investigated in children with non-syndromic cleft lip and/or palate (NSCL/P). However, analysis of auditory processing function is rarely reported for this population, although this issue should not be ignored since abnormal auditory cortical structures have been found in populations with cleft disorders. The present study utilized electrophysiological tests to assess the auditory status of a large group of children with NSCL/P, and investigated whether this group had less robust central auditory processing abilities compared to craniofacially normal children. Methods 146 children with NSCL/P who had normal peripheral hearing thresholds, and 60 craniofacially normal children aged from 6 to 15 years, were recruited. Electrophysiological tests, including auditory brainstem response (ABR), P1-N1-P2 complex, and P300 component recordings, were conducted. Results ABR and N1 wave latencies were significantly prolonged in children with NSCL/P. An atypical developmental trend was found for long latency potentials in children with clefts compared to control group children. Children with unilateral cleft lip and palate showed a greater level of abnormal results compared with other cleft subgroups, whereas the cleft lip subgroup had the most robust responses on all tests. Conclusion Children with NSCL/P may have slower than normal neural transmission times between the peripheral auditory nerve and brainstem. Possible delayed development of myelination and synaptogenesis may also influence auditory processing function in this population. The present research outcomes are consistent with previous, smaller-sample electrophysiological studies of infants and children with cleft lip/palate disorders. In view of these findings, and reports of educational

  8. Basic auditory processing is related to familial risk, not to reading fluency: an ERP study.

    Science.gov (United States)

    Hakvoort, Britt; van der Leij, Aryan; Maurits, Natasha; Maassen, Ben; van Zuijen, Titia L

    2015-02-01

    Less proficient basic auditory processing has previously been connected to dyslexia. However, it is unclear whether a low proficiency level is a correlate of having a familial risk for reading problems, or whether it causes dyslexia. In this study, children's processing of amplitude rise time (ART), intensity and frequency differences was measured with event-related potentials (ERPs). The ERP components of interest are components reflective of auditory change detection: the mismatch negativity (MMN) and the late discriminative negativity (LDN). All groups had an MMN to changes in ART and frequency, but not to intensity. Our results indicate that fluent readers at risk for dyslexia, poor readers at risk for dyslexia and fluent reading controls have an LDN to changes in ART and frequency, though the scalp activation of frequency processing was different for familial risk children. On intensity, only controls showed an LDN. Contrary to previous findings, our results suggest that neither ART nor frequency processing is related to reading fluency. Furthermore, our results imply that diminished sensitivity to changes in intensity and differential lateralization of frequency processing should be regarded as correlates of being at familial risk for dyslexia that do not directly relate to reading fluency.

  9. Information processing in auditory cortex

    Institute of Scientific and Technical Information of China (English)

    王晓勤

    2009-01-01

    In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex include two types of important transformations: the non-isomorphic transformation of acoustic features and the transformation from acoustical to perceptual dimensions. Neural representations in auditory cortex are also modulated by auditory feedback and vocal control signals during speaking or vocalization. The challenges facing auditory neuroscientists and biomedical engineers are to understand the neural coding mechanisms in the brain underlying such transformations. I will use recent findings from my laboratory to illustrate how acoustic information is processed in the primate auditory cortex and discuss its implications for neural processing of speech and music in the brain as well as for the design of neural prosthetic devices such as cochlear implants. We have used a combination of neurophysiological techniques and quantitative engineering tools to investigate these problems.

  10. Evaluation of temporal bone pneumatization on high resolution CT (HRCT) measurements of the temporal bone in normal and otitis media group and their correlation to measurements of internal auditory meatus, vestibular or cochlear aqueduct

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Miyako

    1988-07-01

    High resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These cases consisted of 109 sides with normal pneumatization (NR group) and 73 sides with poor pneumatization caused by chronic otitis media (OM group). The NR group included sides with sensorineural hearing loss and/or sudden deafness. The three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter as well as the length of the internal auditory meatus, the vestibular aqueduct or the cochlear aqueduct was measured. Measurements of the temporal bone showed a statistically significant difference between the NR and OM groups. Correlations of both the diameter and the length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither of the measurements of the vestibular or the cochlear aqueduct showed any significant correlation with those of the temporal bone.

  11. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten;

    2007-01-01

    The present study suggests a neural circuit for the transformation from the temporal to the rate-based code. Due to the neural connectivity of the circuit, bandpass-shaped rate modulation transfer functions are obtained that correspond to recorded functions of inferior colliculus (IC) neurons. In contrast to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies (BMFs) of the IC bandpass units. Instead, different BMFs are obtained by varying the number of input units projecting onto different bandpass units. In order to investigate the compatibility of the neural circuit with a linear modulation filterbank analysis as proposed in psychophysical studies, complex stimuli such as tones modulated by the sum of two sinusoids, narrowband noise, and iterated rippled noise were processed by the model. …
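
    As a rough illustration of the "linear modulation filterbank analysis" mentioned above, the sketch below extracts a stimulus envelope and measures its energy in a few band-pass modulation channels. It is not the study's neural-circuit model; the sampling rate, filter order, bandwidths, and the set of best modulation frequencies (BMFs) are all assumptions chosen for illustration.

```python
# A minimal modulation-filterbank sketch (assumptions noted above).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000.0                                # assumed sampling rate (Hz)
BMFS = [4.0, 8.0, 16.0, 32.0, 64.0, 128.0]  # assumed modulation-filter centers (Hz)

def modulation_filterbank_output(signal, fs=FS, bmfs=BMFS, q=1.0):
    envelope = np.abs(hilbert(signal))                     # Hilbert envelope of the carrier
    outputs = {}
    for fc in bmfs:
        low, high = fc / (1.0 + 0.5 / q), fc * (1.0 + 0.5 / q)   # crude constant-Q edges
        sos = butter(2, [low / (fs / 2), high / (fs / 2)],
                     btype="bandpass", output="sos")
        outputs[fc] = np.sqrt(np.mean(sosfiltfilt(sos, envelope) ** 2))  # RMS per channel
    return outputs

# Example: a tone modulated by the sum of two sinusoids (one of the stimuli mentioned).
t = np.arange(0, 1.0, 1 / FS)
carrier = np.sin(2 * np.pi * 1000 * t)
env = 1 + 0.5 * np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 64 * t)
print(modulation_filterbank_output(carrier * env))   # energy peaks near 8 and 64 Hz
```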

  12. Assessment of auditory sensory processing in a neurodevelopmental animal model of schizophrenia-Gating of auditory-evoked potentials and prepulse inhibition

    DEFF Research Database (Denmark)

    Broberg, Brian Villumsen; Oranje, Bob; Yding, Birte;

    2010-01-01

    The use of translational approaches to validate animal models is needed for the development of treatments that can effectively alleviate cognitive impairments associated with schizophrenia, which are unsuccessfully treated by the currently available therapies. Deficits in pre-attentive stages of sensory information processing seen in schizophrenia patients can be assessed by highly homologous methods in both humans and rodents, as evidenced by the prepulse inhibition (PPI) of the auditory startle response and the P50 (termed P1 here) suppression paradigms. Treatment with the NMDA receptor antagonist […] in the P1 suppression paradigm in the EEG. The results indicate that early postnatal PCP treatment of rats leads to a reduction in PPI of the acoustic startle response. Furthermore, treated animals were assessed in the P1 suppression paradigm and produced significant changes in auditory-evoked potentials …

  13. Neurite-specific Ca2+ dynamics underlying sound processing in an auditory interneurone.

    Science.gov (United States)

    Baden, T; Hedwig, B

    2007-01-01

    Concepts on neuronal signal processing and integration at a cellular and subcellular level are driven by recording techniques and model systems available. The cricket CNS with the omega-1-neurone (ON1) provides a model system for auditory pattern recognition and directional processing. Exploiting ON1's planar structure, we simultaneously imaged free intracellular Ca(2+) at both input and output neurites and recorded the membrane potential in vivo during acoustic stimulation. In response to a single sound pulse, the rate of Ca(2+) rise followed the onset spike rate of ON1, while the final Ca(2+) level depended on the mean spike rate. Ca(2+) rapidly increased in both dendritic and axonal arborizations and only gradually in the axon and the cell body. Ca(2+) levels were particularly high at the spike-generating zone. Through the activation of a Ca(2+)-sensitive K(+) current, this may exert specific control over the cell's electrical response properties. In all cellular compartments, presentation of species-specific calling song caused distinct oscillations of the Ca(2+) level in the chirp rhythm, but not the faster syllable rhythm. The Ca(2+)-mediated hyperpolarization of ON1 suppressed background spike activity between chirps, acting as a noise filter. During directional auditory processing, the functional interaction of Ca(2+)-mediated inhibition and contralateral synaptic inhibition was demonstrated. Upon stimulation with different sound frequencies, the dendrites, but not the axonal arborizations, demonstrated a tonotopic response profile. This mirrored the dominance of the species-specific carrier frequency and resulted in spatial filtering of high frequency auditory inputs.

  14. Mutation of Dcdc2 in mice leads to impairments in auditory processing and memory ability.

    Science.gov (United States)

    Truong, D T; Che, A; Rendall, A R; Szalkowski, C E; LoTurco, J J; Galaburda, A M; Holly Fitch, R

    2014-11-01

    Dyslexia is a complex neurodevelopmental disorder characterized by impaired reading ability despite normal intellect, and is associated with specific difficulties in phonological and rapid auditory processing (RAP), visual attention and working memory. Genetic variants in Doublecortin domain-containing protein 2 (DCDC2) have been associated with dyslexia, impairments in phonological processing and in short-term/working memory. The purpose of this study was to determine whether sensory and behavioral impairments can result directly from mutation of the Dcdc2 gene in mice. Several behavioral tasks, including a modified pre-pulse inhibition paradigm (to examine auditory processing), a 4/8 radial arm maze (to assess/dissociate working vs. reference memory) and rotarod (to examine sensorimotor ability and motor learning), were used to assess the effects of Dcdc2 mutation. Behavioral results revealed deficits in RAP, working memory and reference memory in Dcdc2(del2/del2) mice when compared with matched wild types. Current findings parallel clinical research linking genetic variants of DCDC2 with specific impairments of phonological processing and memory ability.

  15. Changes in Electroencephalogram Approximate Entropy Reflect Auditory Processing and Functional Complexity in Frogs

    Institute of Scientific and Technical Information of China (English)

    Yansu LIU; Yanzhu FAN; Fei XUE; Xizi YUE; Steven E BRAUTH; Yezhong TANG; Guangzhan FANG

    2016-01-01

    Brain systems engage in what are generally considered to be among the most complex forms of information processing. In the present study, we investigated the functional complexity of anuran auditory processing using the approximate entropy (ApEn) protocol for electroencephalogram (EEG) recordings from the forebrain and midbrain while male and female music frogs (Babina daunchina) listened to acoustic stimuli whose biological significance varied. The stimuli used were synthesized white noise (reflecting a novel signal), conspecific male advertisement calls with either high or low sexual attractiveness (reflecting sexual selection) and silence (reflecting a baseline). The results showed that 1) ApEn evoked by conspecific calls exceeded ApEn evoked by synthesized white noise in the left mesencephalon, indicating this structure plays a critical role in processing acoustic signals with biological significance; 2) ApEn in the mesencephalon was significantly higher than for the telencephalon, consistent with the fact that the anuran midbrain contains a large well-organized auditory nucleus (torus semicircularis) while the forebrain does not; 3) for females, ApEn in the mesencephalon was significantly different from that of males, suggesting that males and females process biological stimuli related to mate choice differently.
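
    For readers unfamiliar with the metric, the sketch below shows one common way the approximate entropy (ApEn) statistic referenced above can be computed from a single EEG time series. It is a generic textbook formulation, not the authors' code; the embedding dimension (m = 2) and tolerance (r = 0.2 x SD) are conventional defaults and only assumptions here.

```python
# A minimal approximate-entropy (ApEn) sketch (assumptions noted above).
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)   # tolerance as a fraction of the signal's SD

    def phi(m):
        # Embed the series into overlapping vectors of length m.
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of embedded vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        # Fraction of vectors within tolerance r of each template (self-matches included).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Example: a regular sine wave should yield lower ApEn than white noise.
t = np.linspace(0, 4 * np.pi, 500)
print(approximate_entropy(np.sin(t)), approximate_entropy(np.random.randn(500)))
```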

  16. Cochlear Delay and Medial Olivocochlear Functioning in Children with Suspected Auditory Processing Disorder.

    Directory of Open Access Journals (Sweden)

    Sriram Boothalingam

    Full Text Available Behavioral manifestations of processing deficits associated with auditory processing disorder (APD) have been well documented. However, little is known about their anatomical underpinnings, especially cochlear processing. Cochlear delays, a proxy for cochlear tuning, measured using stimulus frequency otoacoustic emission (SFOAE) group delay, and the influence of medial olivocochlear (MOC) system activation at the auditory periphery were studied in 23 children suspected with APD (sAPD) and 22 typically developing (TD) children. Results suggest that children suspected with APD have longer SFOAE group delays (possibly due to sharper cochlear tuning) and reduced MOC function compared to TD children. Other differences between the groups include a correlation between MOC function and SFOAE delay in quiet in the TD group, and a lack thereof in the sAPD group. MOC-mediated changes in SFOAE delay were in opposite directions between groups: an increase in delay in the TD group vs. a reduction in delay in the sAPD group. Longer SFOAE group delays in the sAPD group may lead to longer cochlear filter ringing, and a potential increase in forward masking. These results indicate differences in cochlear and MOC function between the sAPD and TD groups. Further studies are warranted to explore the possibility of the cochlea as a potential site for processing deficits in APD.
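
    As a point of reference for the group-delay measure discussed above, the sketch below estimates group delay as the negative slope of unwrapped phase with respect to frequency. This is a generic formulation rather than the authors' analysis; the probe frequencies and the constant 10-ms delay in the example are purely illustrative.

```python
# A minimal group-delay sketch from a phase-frequency function (assumptions noted above).
import numpy as np

def group_delay(freqs_hz, phase_rad):
    """Return group delay in seconds at each frequency: -(1/2*pi) * d(phase)/d(freq)."""
    phase = np.unwrap(phase_rad)
    return -np.gradient(phase, freqs_hz) / (2 * np.pi)

# Synthetic example: a constant 10-ms delay produces phase = -2*pi*f*0.010.
freqs = np.linspace(1000, 2000, 101)      # probe frequencies (Hz), assumed
phase = -2 * np.pi * freqs * 0.010        # phase in radians
print(group_delay(freqs, phase).mean())   # ~0.010 s
```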

  17. The effectiveness of imagery and sentence strategy instructions as a function of visual and auditory processing in young school-age children.

    Science.gov (United States)

    Weed, K; Ryan, E B

    1985-12-01

    The relationship between auditory and visual processing modality and strategy instructions was examined in first- and second-grade children. A Pictograph Sentence Memory Test was used to determine dominant processing modality as well as to assess instructional effects. The pictograph task was given first followed by auditory or visual interference. Children who were disrupted more by visual interference were classed as visual processors and those more disrupted by auditory interference were classed as auditory processors. Auditory and visual processors were then assigned to one of three conditions: interactive imagery strategy, sentence strategy, or a control group. Children in the imagery and sentence strategy groups were briefly taught to integrate the pictographs in order to remember them better. The sentence strategy was found to be effective for both auditory and visual processors, whereas the interactive imagery strategy was effective only for auditory processors.

  18. Chinese-English bilinguals processing temporal-spatial metaphor.

    Science.gov (United States)

    Xue, Jin; Yang, Jie; Zhao, Qian

    2014-08-01

    The conceptual projection of time onto the domain of space constitutes one of the most challenging issues in cognitive embodied theories. In Chinese, spatial order (e.g., /da shu qian/, in front of a tree) shares the same terms with temporal sequence (e.g., /san yue qian/, before March). In comparison, English natives use different sets of prepositions to describe spatial and temporal relationships, i.e., "before" to express temporal sequencing and "in front of" to express spatial order. The linguistic variations regarding the specific lexical encodings indicate that some flexibility might be available in how space-time parallelisms are formulated across different languages. In the present study, ERP (event-related potential) data were collected while Chinese-English bilinguals processed temporal ordering and spatial sequencing in both their first language (L1), Chinese (Experiment 1), and their second language (L2), English (Experiment 2). It was found that, despite the different lexical encodings, early sensorimotor simulation plays a role in temporal sequencing processing in both L1 Chinese and L2 English. The findings support the embodied theory that conceptual knowledge is grounded in sensory-motor systems (Gallese and Lakoff, Cogn Neuropsychol 22:455-479, 2005). Additionally, in both languages, neural representations during the comprehension of temporal sequencing and spatial ordering are different. The time-space relationship is asymmetric, in that the space schema can be imported into temporal sequence processing but not vice versa. These findings support the weak view of the Metaphoric Mapping Theory.

  19. Impairments of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating cognitive mechanisms that mediate auditory scene analysis

  20. Auditory target processing in methadone substituted opiate addicts: The effect of nicotine in controls

    Directory of Open Access Journals (Sweden)

    Zerbin Dieter

    2007-11-01

    Full Text Available Background The P300 component of the auditory evoked potential is an indicator of attention-dependent target processing. Only a few studies have assessed cognitive function in substituted opiate addicts by means of evoked potential recordings. In addition, P300 data suggest that chronic nicotine use reduces P300 amplitudes. While nicotine and opiate effects combine in addicted subjects, here we investigated the P300 component of the auditory event-related potential in methadone substituted opiate addicts with and without concomitant non-opioid drug use in comparison to a group of control subjects with and without nicotine consumption. Methods We assessed 47 opiate addicted out-patients under current methadone substitution and 65 control subjects matched for age and gender in a 2-stimulus auditory oddball paradigm. Patients were grouped into those with and without additional non-opioid drug use, and controls were grouped by current nicotine use. P300 amplitude and latency data were analyzed at electrodes Fz, Cz and Pz. Results Patients and controls did not differ with regard to P300 amplitudes and latencies when whole groups were compared. Subgroup analyses revealed significantly reduced P300 amplitudes in controls with nicotine use when compared to those without. P300 amplitudes of methadone substituted opiate addicts were in between the two control groups and did not differ with regard to additional non-opioid use. Controls with nicotine had lower P300 amplitudes when compared to patients with concomitant non-opioid drugs. No P300 latency effects were found. Conclusion Attention-dependent target processing as indexed by P300 component amplitudes and latencies is not reduced in methadone substituted opiate addicts when compared to controls. The effect of nicotine on P300 amplitudes in healthy subjects exceeds the effects of long-term opioid addiction under methadone substitution.
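
    For context on how P300 amplitude and latency measures of this kind are typically obtained, the sketch below extracts the peak from an averaged target ERP at a single electrode. It is a generic illustration, not this study's analysis pipeline; the sampling rate, baseline, and 250-500 ms search window are assumptions.

```python
# A minimal P300 peak-extraction sketch for an oddball target ERP (assumptions noted above).
import numpy as np

FS = 500.0                    # assumed sampling rate (Hz)
BASELINE = (-0.1, 0.0)        # assumed pre-stimulus baseline (s)
P3_WINDOW = (0.250, 0.500)    # typical P300 search window (s)

def p300_peak(target_epochs, fs=FS, t0=-0.1):
    """target_epochs: (n_trials, n_samples) single-electrode data, stimulus onset at t = 0."""
    erp = target_epochs.mean(axis=0)                       # average over target trials
    times = t0 + np.arange(erp.size) / fs
    baseline = erp[(times >= BASELINE[0]) & (times < BASELINE[1])].mean()
    erp = erp - baseline                                   # baseline correction
    win = (times >= P3_WINDOW[0]) & (times <= P3_WINDOW[1])
    idx = np.argmax(erp[win])                              # most positive point in window
    return erp[win][idx], times[win][idx]                  # (amplitude, latency in s)

# Hypothetical data: 40 target trials, epochs spanning -100 to +700 ms.
rng = np.random.default_rng(1)
amp, lat = p300_peak(rng.standard_normal((40, 400)))
print(f"P300 amplitude {amp:.2f} (a.u.) at {lat * 1000:.0f} ms")
```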

  1. How does experience modulate auditory spatial processing in individuals with blindness?

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while they performed a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and indicate from which of 15 locations the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas, as a result of cross-modal plasticity, for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas, which subserve visuospatial working memory.

  2. Oxytocin receptor gene associated with the efficiency of social auditory processing

    Directory of Open Access Journals (Sweden)

    Mattie eTops

    2011-11-01

    Full Text Available Oxytocin has been shown to facilitate social aspects of sensory processing, thereby enhancing social communicative behaviors and empathy. Here we report that compared to the AA/AG genotypes, the presumably more efficient GG genotype of an oxytocin receptor gene polymorphism (OXTR rs53576) that has previously been associated with increased sensitivity of social processing is related to less self-reported difficulty in hearing and understanding people when there is background noise. The present result extends associations between oxytocin and social processing to the auditory and vocal domain. We discuss the relevance of our findings for autistic spectrum disorders (ASD), as ASD seems related to specific impairments in the orienting to, and selection of speech sounds from background noise, and some social processing impairments in patients with ASD have been found responsive to oxytocin treatment.

  3. Multimodal imaging of temporal processing in typical and atypical language development.

    Science.gov (United States)

    Kovelman, Ioulia; Wagley, Neelima; Hay, Jessica S F; Ugolini, Margaret; Bowyer, Susan M; Lajiness-O'Neill, Renee; Brennan, Jonathan

    2015-03-01

    New approaches to understanding language and reading acquisition propose that the human brain's ability to synchronize its neural firing rate to syllable-length linguistic units may be important to children's ability to acquire human language. Yet, little evidence from brain imaging studies has been available to support this proposal. Here, we summarize three recent brain imaging (functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG)) studies from our laboratories with young English-speaking children (aged 6-12 years). In the first study (fNIRS), we used an auditory beat perception task to show that, in children, the left superior temporal gyrus (STG) responds preferentially to rhythmic beats at 1.5 Hz. In the second study (fMRI), we found correlations between children's amplitude rise-time sensitivity, phonological awareness, and brain activation in the left STG. In the third study (MEG), typically developing children outperformed children with autism spectrum disorder in extracting words from rhythmically rich foreign speech and displayed different brain activation during the learning phase. The overall findings suggest that the efficiency with which left temporal regions process slow temporal (rhythmic) information may be important for gains in language and reading proficiency. These findings carry implications for better understanding of the brain's mechanisms that support language and reading acquisition during both typical and atypical development.

  4. Processamento auditivo em idosos: implicações e soluções Auditory processing in elderly: implications and solutions

    Directory of Open Access Journals (Sweden)

    Leonardo Henrique Buss

    2010-02-01

    Full Text Available BACKGROUND: auditory processing in the elderly. PURPOSE: to review, through a theoretical survey, auditory processing in elderly people, the disorders caused by auditory aging, and the resources available to reduce the deficits in the hearing abilities involved in auditory processing. CONCLUSION: the alterations caused by auditory processing disorder in elderly people are many. Continued scientific study in this field is necessary in order to apply adequate intervention measures and to assure rehabilitation of the individual in time to minimize the effects of the hearing disorder.

  5. Right cerebral hemisphere and central auditory processing in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Paulina C. Murphy-Ruiz

    2013-11-01

    Full Text Available Objective We hypothesized that, if right-hemisphere auditory processing abilities are altered in children with developmental dyslexia (DD), we can detect the dysfunction using specific tests. Method We performed an analytical comparative cross-sectional study. We studied 20 right-handed children with DD and 20 healthy right-handed control subjects (CS). Children in both groups were age-, gender-, and school-grade matched. Focusing on the right hemisphere's contribution, we utilized tests to measure alterations in central auditory processing (CAP), such as determination of frequency patterns; sound duration; music pitch recognition; and identification of environmental sounds. We compared results between the two groups. Results Children with DD showed lower performance than CS in all CAP subtests, including those that preferentially engage the cerebral right hemisphere. Conclusion Our data suggest a significant contribution of the right hemisphere to alterations of CAP in children with DD. Thus, right hemisphere CAP must be considered in the examination and rehabilitation of children with DD.

  6. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Background Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
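
    As an aside on stimulus construction, the sketch below shows one simple way to generate the "repeating frozen noise" described above: a single noise segment of length 1/rate is tiled to fill the stimulus, whereas ordinary white noise is drawn fresh throughout. The sampling rate and duration are assumptions, not the study's parameters.

```python
# A minimal repeating-frozen-noise generator (assumptions noted above).
import numpy as np

FS = 44100          # assumed sampling rate (Hz)
DURATION = 1.0      # assumed stimulus duration (s)

def repeating_frozen_noise(repetition_rate_hz, duration=DURATION, fs=FS, seed=0):
    """One frozen noise segment of length 1/rate, tiled to fill the stimulus."""
    rng = np.random.default_rng(seed)
    segment = rng.standard_normal(int(round(fs / repetition_rate_hz)))
    n_total = int(duration * fs)
    reps = int(np.ceil(n_total / segment.size))
    return np.tile(segment, reps)[:n_total]

# Stimuli at the repetition rates used in the study, plus non-repeating white noise.
stimuli = {rate: repeating_frozen_noise(rate) for rate in (5, 10, 50, 200, 500)}
white_noise = np.random.default_rng(1).standard_normal(int(DURATION * FS))
print({rate: s.size for rate, s in stimuli.items()})
```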

  7. Dynamics of Neural Responses in Ferret Primary Auditory Cortex: I. Spectro-Temporal Response Field Characterization by Dynamic Ripple Spectra

    Science.gov (United States)

    1999-01-01


  8. Differential Processing of Consonance and Dissonance within the Human Superior Temporal Gyrus.

    Science.gov (United States)

    Foo, Francine; King-Stephens, David; Weber, Peter; Laxer, Kenneth; Parvizi, Josef; Knight, Robert T

    2016-01-01

    The auditory cortex is well-known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals directly from the lateral surface of either the left or right temporal lobe of eight patients undergoing neurosurgical treatment as they passively listened to highly consonant and highly dissonant musical chords. We assessed ECoG activity in the high gamma (γhigh, 70-150 Hz) frequency range within the superior temporal gyrus (STG) and observed two types of cortical sites of interest in both hemispheres: one type showed no significant difference in γhigh activity between consonant and dissonant chords, and another type showed increased γhigh responses to dissonant chords between 75 and 200 ms post-stimulus onset. Furthermore, a subset of these sites exhibited additional sensitivity towards different types of dissonant chords, and a positive correlation between changes in γhigh power and the degree of stimulus roughness was observed in both hemispheres. We also observed a distinct spatial organization of cortical sites in the right STG, with dissonant-sensitive sites located anterior to non-sensitive sites. In sum, these findings demonstrate differential processing of consonance and dissonance in bilateral STG with the right hemisphere exhibiting robust and spatially organized sensitivity toward dissonance.
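
    To make the analysis concrete, the sketch below illustrates one common way to quantify high-gamma (70-150 Hz) power in a post-stimulus window and compare it between chord types. It is a generic illustration, not the authors' pipeline; the sampling rate, epoch length, trial counts, and the simple t-test are assumptions.

```python
# A minimal high-gamma ECoG power comparison sketch (assumptions noted above).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import ttest_ind

FS = 1000.0               # assumed sampling rate (Hz)
BAND = (70.0, 150.0)      # high-gamma band from the abstract
WINDOW = (0.075, 0.200)   # 75-200 ms post-stimulus window from the abstract

def high_gamma_power(epochs, fs=FS, band=BAND, window=WINDOW):
    """epochs: (n_trials, n_samples) for one ECoG channel, time-locked to stimulus onset."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=-1)            # zero-phase band-pass filter
    envelope = np.abs(hilbert(filtered, axis=-1)) ** 2    # instantaneous power
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return envelope[:, i0:i1].mean(axis=-1)               # mean power per trial

# Hypothetical epoched data: 60 consonant and 60 dissonant trials, 600 ms each.
rng = np.random.default_rng(0)
consonant = rng.standard_normal((60, 600))
dissonant = rng.standard_normal((60, 600))

t, p = ttest_ind(high_gamma_power(dissonant), high_gamma_power(consonant))
print(f"dissonant vs consonant high-gamma power: t = {t:.2f}, p = {p:.3f}")
```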

  9. Behavioral Signs of (Central) Auditory Processing Disorder in Children With Nonsyndromic Cleft Lip and/or Palate: A Parental Questionnaire Approach.

    Science.gov (United States)

    Ma, Xiaoran; McPherson, Bradley; Ma, Lian

    2016-03-01

    Objective Children with nonsyndromic cleft lip and/or palate often have a high prevalence of middle ear dysfunction. However, there are also indications that they may have a higher prevalence of (central) auditory processing disorder. This study used Fisher's Auditory Problems Checklist for caregivers to determine whether children with nonsyndromic cleft lip and/or palate have potentially more auditory processing difficulties compared with craniofacially normal children. Methods Caregivers of 147 school-aged children with nonsyndromic cleft lip and/or palate were recruited for the study. This group was divided into three subgroups: cleft lip, cleft palate, and cleft lip and palate. Caregivers of 60 craniofacially normal children were recruited as a control group. Hearing health tests were conducted to evaluate peripheral hearing. Caregivers of children who passed this assessment battery completed Fisher's Auditory Problems Checklist, which contains 25 questions related to behaviors linked to (central) auditory processing disorder. Results Children with cleft palate showed the lowest scores on the Fisher's Auditory Problems Checklist questionnaire, consistent with a higher index of suspicion for (central) auditory processing disorder. There was a significant difference in the manifestation of (central) auditory processing disorder-linked behaviors between the cleft palate and the control groups. The most common behaviors reported in the nonsyndromic cleft lip and/or palate group were short attention span and reduced learning motivation, along with hearing difficulties in noise. Conclusion A higher occurrence of (central) auditory processing disorder-linked behaviors was found in children with nonsyndromic cleft lip and/or palate, particularly cleft palate. Auditory processing abilities should not be ignored in children with nonsyndromic cleft lip and/or palate, and it is necessary to consider assessment tests for (central) auditory processing disorder when an

  10. On spatio-temporal Lévy based Cox processes

    DEFF Research Database (Denmark)

    Prokesova, Michaela; Hellmund, Gunnar; Jensen, Eva Bjørn Vedel

    2006-01-01

    The paper discusses a new class of models for spatio-temporal Cox point processes. In these models, the driving field is defined by means of an integral of a weight function with respect to a Lévy basis. The relations to other Cox process models studied previously are discussed and formulas for t...
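
    Since the abstract is truncated, the following display is only a hedged sketch of the general form such a model takes: the point process X is Cox (doubly stochastic Poisson) with a random driving intensity obtained by integrating a weight (kernel) function against a Lévy basis L. The kernel k and the basis are placeholders, not the specific choices made in the paper.

    \[
      X \mid \Lambda \sim \mathrm{Poisson}(\Lambda), \qquad
      \Lambda(\xi, t) \;=\; \int_{\mathbb{R}^{d} \times \mathbb{R}} k\bigl((\xi, t), (x, s)\bigr)\, L(\mathrm{d}x, \mathrm{d}s).
    \]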

  11. MODIS multi-temporal data retrieval and processing toolbox

    NARCIS (Netherlands)

    Mattiuzzi, M.; Verbesselt, J.; Klisch, A.

    2012-01-01

    The package functionalities focus on the download and processing of multi-temporal datasets from MODIS sensors. All standard MODIS grid data can be accessed and processed by the package routines. The package is still in alpha development, and not all functionalities are available yet.

  12. IMPAIRED PROCESSING IN THE PRIMARY AUDITORY CORTEX OF AN ANIMAL MODEL OF AUTISM

    Directory of Open Access Journals (Sweden)

    Renata eAnomal

    2015-11-01

    Full Text Available Autism is a neurodevelopmental disorder clinically characterized by deficits in communication, lack of social interaction, and repetitive behaviors with restricted interests. A number of studies have reported that sensory perception abnormalities are common in autistic individuals and might contribute to the complex behavioral symptoms of the disorder. In this context, hearing incongruence is particularly prevalent. Considering that some of this abnormal processing might stem from the imbalance of inhibitory and excitatory drives in brain circuitries, we used an animal model of autism induced by valproic acid (VPA) during pregnancy in order to investigate the tonotopic organization of the primary auditory cortex (AI) and its local inhibitory circuitry. Our results show that VPA rats have distorted primary auditory maps with over-representation of high frequencies, broadly tuned receptive fields, and higher sound intensity thresholds as compared to controls. However, we did not detect differences in the number of parvalbumin-positive interneurons in AI of VPA and control rats. Altogether, our findings show that neurophysiological impairments of hearing perception in this autism model occur independently of alterations in the number of parvalbumin-expressing interneurons. These data support the notion that fine circuit alterations, rather than gross cellular modification, could lead to neurophysiological changes in the autistic brain.

  13. The role of the cerebellum in auditory processing using the SSI test A participação do cerebelo no processamento auditivo com o uso do teste SSI

    OpenAIRE

    Patricia Maria Sens; Clemente Isnard Ribeiro de Almeida; Marisa Mara Neves de Souza; Gonçalves,Josyane Borges A.; Luiz Claudio do Carmo

    2011-01-01

    The Synthetic Sentence Identification (SSI) test assesses central auditory pathways by measuring auditory and visual sensitivity and testing selective attention. Cerebellum activation in auditory attention and sensorial activity modulation have already been described. Assessing patients with cerebellar lesions alone using the SSI test can confirm the role of the cerebellum in auditory processing. AIM: To evaluate the role of the cerebellum in auditory processing in individuals with normal hea...

  14. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
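
    As a toy illustration of the delay-and-coincide principle described above (only of the principle; the published five-neuron circuit is not reproduced here), the Python sketch below multiplies a "direct" response to a pulse train with a copy delayed by a fixed intrinsic delay, so the summed output is large only when the pulse period matches that delay. Pulse widths, delay, and test periods are arbitrary assumptions.

    import numpy as np

    def pulse_train(period_ms, n_pulses=8, pulse_ms=5):
        # 1-kHz-sampled envelope of a train of rectangular sound pulses
        s = np.zeros(int(period_ms * n_pulses))
        for k in range(n_pulses):
            onset = int(k * period_ms)
            s[onset:onset + pulse_ms] = 1.0
        return s

    def coincidence_output(stimulus, intrinsic_delay_ms=20):
        # direct response multiplied with an intrinsically delayed copy, then summed
        d = int(intrinsic_delay_ms)
        delayed = np.zeros_like(stimulus)
        delayed[d:] = stimulus[:-d]
        return float(np.sum(stimulus * delayed))

    # 20 ms matches the assumed intrinsic delay, so only that period yields coincidences
    # (a bare delay line would also respond to periods dividing the delay; the real
    # circuit relies on additional mechanisms to avoid this)
    for period in (15, 20, 25, 40):
        out = coincidence_output(pulse_train(period))
        print(f"pulse period {period} ms -> coincidence output {out:.0f}")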

  15. At the interface of the auditory and vocal motor systems: NIf and its role in vocal processing, production and learning.

    Science.gov (United States)

    Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc

    2013-06-01

    Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans.

  16. Neural Response Properties of Primary, Rostral, and Rostrotemporal Core Fields in the Auditory Cortex of Marmoset Monkeys

    OpenAIRE

    Bendor, Daniel; WANG, Xiaoqin

    2008-01-01

    The core region of primate auditory cortex contains a primary and two primary-like fields (AI, primary auditory cortex; R, rostral field; RT, rostrotemporal field). Although it is reasonable to assume that multiple core fields provide an advantage for auditory processing over a single primary field, the differential roles these fields play and whether they form a functional pathway collectively such as for the processing of spectral or temporal information are unknown. In this report we compa...

  17. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit.

  18. Temporal processing deficits in letter-by-letter reading.

    Science.gov (United States)

    Ingles, Janet L; Eskes, Gail A

    2007-01-01

    Theories of the cognitive impairment underlying letter-by-letter reading vary widely, including prelexical and lexical level deficits. One prominent prelexical account proposes that the disorder results from difficulty in processing multiple letters simultaneously. We investigated whether this deficit extends to letters presented in rapid temporal succession. A letter-by-letter reader, G.M., was administered a rapid serial visual presentation task that has been used widely to study the temporal processing characteristics of the normal visual system. Comparisons were made to a control group of 6 brain-damaged individuals without reading deficits. Two target letters were embedded at varying temporal positions in a stream of rapidly presented single digits. After each stream, the identities of the two letters were reported. G.M. required an extended period of time after he had processed one letter before he was able to reliably identify a second letter, relative to the controls. In addition, G.M.'s report of the second letter was most impaired when it immediately followed the first letter, a pattern not seen in the controls, indicating that G.M. had difficulty processing the two items together. These data suggest that a letter-by-letter reading strategy may be adopted to help compensate for a deficit in the temporal processing of letters.

  19. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  20. Temporal event-structure coding in developmental dyslexia: Evidence from explicit and implicit temporal processes

    Directory of Open Access Journals (Sweden)

    Elliott Mark A.

    2010-01-01

    Full Text Available As an alternative to theories positing visual or phonological deficits, it has been suggested that the aetiology of dyslexia takes the form of a temporal processing deficit that may refer to impairment in the functional connectivity of the processes involved in reading. Here we investigated this idea in an experimental task designed to measure simultaneity thresholds. Fifteen children diagnosed with developmental dyslexia, alongside a matched sample of 13 normal readers, undertook a series of threshold determination procedures designed to locate visual simultaneity thresholds and to assess the influence of subthreshold synchrony or asynchrony upon these thresholds. While there were no significant differences in simultaneity thresholds between dyslexic and normal readers, indicating no evidence of an altered perception or temporal quantization of events, the dyslexic readers reported simultaneity significantly less frequently than normal readers, with the reduction largely attributable to the presentation of a subthreshold asynchrony. The results are discussed in terms of a whole-systems approach to maintaining information processing integrity.

  1. Basic Auditory Processing Deficits in Dyslexia: Systematic Review of the Behavioral and Event-Related Potential/Field Evidence

    Science.gov (United States)

    Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.

    2013-01-01

    A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…

  2. Hearing, Auditory Processing, and Language Skills of Male Youth Offenders and Remandees in Youth Justice Residences in New Zealand

    Science.gov (United States)

    Lount, Sarah A.; Purdy, Suzanne C.; Hand, Linda

    2017-01-01

    Purpose: International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand. Method: Thirty-three male YORs, aged 14-17 years, were recruited from 2 youth…

  3. Temporal processing in postlingual adult users of cochlear implant

    Directory of Open Access Journals (Sweden)

    Maycon Duarte

    Full Text Available ABSTRACT INTRODUCTION: Postlingual adults demonstrate impressive performance in speech recognition in silence after cochlear implant (CI) surgery. However, problems in central hearing abilities remain, which complicate understanding in certain situations, such as in competitive listening and in the perception of suprasegmental aspects of speech. OBJECTIVE: To assess the temporal processing abilities of postlingual adult users of CI. METHODS: Cross-sectional and descriptive study, with a non-probabilistic convenience sample. The population was divided into two groups. The study group consisted of 12 postlingual adult users of cochlear implants, and the control group consisted of 12 adults with normal hearing, matched for age and gender with the study group. The Frequency Pattern Test and the Gaps in Noise test were selected to assess temporal processing. Free-field testing was applied at 50 dB SL. RESULTS: Adult users of cochlear implants attained a mean temporal threshold of 16.33 ms and scored 47.7% in the frequency pattern test; the difference was statistically significant in comparison with the control group. CONCLUSION: It was verified that postlingual adult users of cochlear implants have significant alterations in temporal processing abilities in comparison to adults with normal hearing.

  4. Cerebro-cerebellar interactions underlying temporal information processing.

    Science.gov (United States)

    Aso, Kenji; Hanakawa, Takashi; Aso, Toshihiko; Fukuyama, Hidenao

    2010-12-01

    The neural basis of temporal information processing remains unclear, but it is proposed that the cerebellum plays an important role through its internal clock or feed-forward computation functions. In this study, fMRI was used to investigate the brain networks engaged in perceptual and motor aspects of subsecond temporal processing without accompanying coprocessing of spatial information. Direct comparison between perceptual and motor aspects of time processing was made with a categorical-design analysis. The right lateral cerebellum (lobule VI) was active during a time discrimination task, whereas the left cerebellar lobule VI was activated during a timed movement generation task. These findings were consistent with the idea that the cerebellum contributed to subsecond time processing in both perceptual and motor aspects. The feed-forward computational theory of the cerebellum predicted increased cerebro-cerebellar interactions during time information processing. In fact, a psychophysiological interaction analysis identified the supplementary motor and dorsal premotor areas, which had a significant functional connectivity with the right cerebellar region during a time discrimination task and with the left lateral cerebellum during a timed movement generation task. The involvement of cerebro-cerebellar interactions may provide supportive evidence that temporal information processing relies on the simulation of timing information through feed-forward computation in the cerebellum.

  5. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    Science.gov (United States)

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  6. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system cons

  7. The Relationship between Auditory Processing and Restricted, Repetitive Behaviors in Adults with Autism Spectrum Disorders

    Science.gov (United States)

    Kargas, Niko; López, Beatriz; Reddy, Vasudevi; Morris, Paul

    2015-01-01

    Current views suggest that autism spectrum disorders (ASDs) are characterised by enhanced low-level auditory discrimination abilities. Little is known, however, about whether enhanced abilities are universal in ASD and how they relate to symptomatology. We tested auditory discrimination for intensity, frequency and duration in 21 adults with ASD…

  8. Auditory Brain Stem Processing in Reptiles and Amphibians: Roles of Coupled Ears

    DEFF Research Database (Denmark)

    Willis, Katie L.; Christensen-Dalsgaard, Jakob; Carr, Catherine

    2014-01-01

    of anurans (frogs), reptiles (including birds), and mammals should all be more similar within each group than among the groups. Although there is large variation in the peripheral auditory system, there is evidence that auditory brain stem nuclei in tetrapods are homologous and have similar functions among...

  9. Frequency processing at consecutive levels in the auditory system of bush crickets (tettigoniidae).

    Science.gov (United States)

    Ostrowski, Tim Daniel; Stumpner, Andreas

    2010-08-01

    We asked how the processing of male signals in the auditory pathway of the bush cricket Ancistrura nigrovittata (Phaneropterinae, Tettigoniidae) changes from the ear to the brain. Of the 37 sensory neurons in the crista acustica, single elements (cells 8 or 9) have frequency tuning corresponding closely to the behavioral tuning of the females. Nevertheless, one-quarter of the sensory neurons (approximately cells 9 to 18) excite the ascending neuron 1 (AN1), which is best tuned to the male's song carrier frequency. AN1 receives frequency-dependent inhibition, reducing sensitivity especially in the ultrasound. When recorded in the brain, AN1 shows slightly lower overall activity than when recorded in the prothoracic ganglion close to the spike-generating zone. This difference is significant in the ultrasonic range. The first identified local brain neuron in a bush cricket (LBN1) is described. Its dendrites overlap with some of the AN1 terminations in the brain. Its frequency tuning and intensity dependence strongly suggest a direct postsynaptic connection to AN1. Spiking in LBN1 is only elicited after summation of excitatory postsynaptic potentials evoked by individual AN1 action potentials. This serves as a filtering mechanism that reduces the sensitivity of LBN1 and also its responsiveness to ultrasound as compared to AN1. Consequently, spike latencies of LBN1 are long (>30 ms) despite its being a second-order interneuron. Additionally, LBN1 receives frequency-specific inhibition, most likely further reducing its responses to ultrasound. This demonstrates that frequency-specific inhibition is implemented redundantly in two directly connected interneurons on subsequent levels in the auditory system.

  10. Temporal and Location Based RFID Event Data Management and Processing

    Science.gov (United States)

    Wang, Fusheng; Liu, Peiya

    Advances in sensor and RFID technology provide significant new power for humans to sense, understand and manage the world. RFID provides fast data collection with precise identification of objects with unique IDs without line of sight, thus it can be used for identifying, locating, tracking and monitoring physical objects. Despite these benefits, RFID poses many challenges for data processing and management. RFID data are temporal and history oriented, multi-dimensional, and carry implicit semantics. Moreover, RFID applications are heterogeneous. RFID data management or data warehouse systems need to support generic and expressive data modeling for tracking and monitoring physical objects, and provide automated data interpretation and processing. We develop a powerful temporal and location oriented data model for modeling and querying RFID data, and a declarative event and rule based framework for automated complex RFID event processing. The approach is general and can be easily adapted for different RFID-enabled applications, thus significantly reducing the cost of RFID data integration.
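
    To make the kind of "temporal and location oriented" record concrete, here is a small hypothetical Python sketch (not the data model or rule language proposed in the paper): raw (tag, location, time) reads are coalesced into stay intervals, and a containment query asks which tags were at a location at a given instant. All field names and the 30-second merge gap are illustrative assumptions.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import List, Tuple

    @dataclass
    class Stay:
        epc: str            # tag's unique ID
        location: str       # reader zone where the tag was seen
        start: datetime     # first read of the stay
        end: datetime       # last read of the stay

    def coalesce(reads: List[Tuple[str, str, datetime]],
                 max_gap: timedelta = timedelta(seconds=30)) -> List[Stay]:
        # merge raw (epc, location, time) reads into per-tag stay intervals
        stays: List[Stay] = []
        for epc, loc, t in sorted(reads, key=lambda r: (r[0], r[2])):
            last = stays[-1] if stays else None
            if last and last.epc == epc and last.location == loc and t - last.end <= max_gap:
                last.end = t                      # extend the current stay
            else:
                stays.append(Stay(epc, loc, t, t))
        return stays

    def at_location(stays: List[Stay], location: str, when: datetime) -> List[str]:
        # temporal containment query: which tags were at `location` at time `when`?
        return [s.epc for s in stays if s.location == location and s.start <= when <= s.end]

    reads = [
        ("EPC-1", "dock-A", datetime(2024, 1, 1, 10, 0, 0)),
        ("EPC-1", "dock-A", datetime(2024, 1, 1, 10, 0, 20)),
        ("EPC-1", "shelf-3", datetime(2024, 1, 1, 10, 5, 0)),
    ]
    print(at_location(coalesce(reads), "dock-A", datetime(2024, 1, 1, 10, 0, 10)))  # ['EPC-1']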

  11. Research on Process-oriented Spatio-temporal Data Model

    Directory of Open Access Journals (Sweden)

    XUE Cunjin

    2016-02-01

    Full Text Available Based on an analysis of the present status and existing problems of the spatio-temporal data models developed over the last 20 years, this paper proposes a process-oriented spatio-temporal data model (POSTDM), aiming at representing, organizing and storing continuous and gradually changing geographical entities. The dynamic geographical entities are graded and abstracted, from their intrinsic characteristics, into a series of process objects: process objects, process stage objects, process sequence objects and process state objects. The logical relationships among process entities are further studied, and the structure of the UML models and storage is also designed. In addition, through the mechanisms of continuous and gradual change implicitly recorded by process objects, and the procedure interfaces offered by the customized ObjectStorageTable, the POSTDM can carry out process representation, storage and dynamic analysis of continuous and gradually changing geographic entities. Taking the process organization and storage of marine data as an example, a prototype system (consisting of an object-relational database and a functional analysis platform) is developed for validating and evaluating the model's practicability.
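
    Purely as an illustration of the four object levels named in the abstract (process, stage, sequence and state objects), the Python sketch below nests them in that order; the actual UML design, the ObjectStorageTable interfaces and the marine prototype are not reproduced here, and every field name is a guessed placeholder.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class ProcessState:                 # one snapshot of the entity at a time instant
        timestamp: str
        geometry: Tuple[float, float]   # e.g. centroid (lon, lat) of a marine feature
        attributes: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class ProcessSequence:              # a run of states with a consistent trend
        trend: str                      # e.g. "expanding", "shrinking", "stable"
        states: List[ProcessState] = field(default_factory=list)

    @dataclass
    class ProcessStage:                 # a named phase of the process
        name: str                       # e.g. "development", "maturity", "decay"
        sequences: List[ProcessSequence] = field(default_factory=list)

    @dataclass
    class GeoProcess:                   # the whole dynamic geographic entity
        entity_id: str
        stages: List[ProcessStage] = field(default_factory=list)

        def lifetime(self) -> Tuple[str, str]:
            # first and last recorded timestamps across all nested states
            times = [st.timestamp for s in self.stages
                     for q in s.sequences for st in q.states]
            return min(times), max(times)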

  12. Cognitive processing effects on auditory event-related potentials and the evoked cardiac response.

    Science.gov (United States)

    Lawrence, Carlie A; Barry, Robert J

    2010-11-01

    The phasic evoked cardiac response (ECR) produced by innocuous stimuli requiring cognitive processing may be described as the sum of two independent response components. An initial heart rate (HR) deceleration (ECR1), and a slightly later HR acceleration (ECR2), have been hypothesised to reflect stimulus registration and cognitive processing load, respectively. This study investigated the effects of processing load in the ECR and the event-related potential, in an attempt to find similarities between measures found important in the autonomic orienting reflex context and ERP literature. We examined the effects of cognitive load within-subjects, using a long inter-stimulus interval (ISI) ANS-style paradigm. Subjects (N=40) were presented with 30-35 80dB, 1000Hz tones with a variable long ISI (7-9s), and required to silently count, or allowed to ignore, the tone in two counterbalanced stimulus blocks. The ECR showed a significant effect of counting, allowing separation of the two ECR components by subtracting the NoCount from the Count condition. The auditory ERP showed the expected obligatory processing effects in the N1, and substantial effects of cognitive load in the late positive complex (LPC). These data offer support for ANS-CNS connections worth pursuing further in future work.

  13. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    Science.gov (United States)

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations of auditory responses falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, i.e. from simple detection to discrimination, and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of

  14. Effects of parietal TMS on visual and auditory processing at the primary cortical level -- a concurrent TMS-fMRI study.

    Science.gov (United States)

    Leitão, Joana; Thielscher, Axel; Werner, Sebastian; Pohmann, Rolf; Noppeney, Uta

    2013-04-01

    Accumulating evidence suggests that multisensory interactions emerge already at the primary cortical level. Specifically, auditory inputs were shown to suppress activations in visual cortices when presented alone but amplify the blood oxygen level-dependent (BOLD) responses to concurrent visual inputs (and vice versa). This concurrent transcranial magnetic stimulation-functional magnetic resonance imaging (TMS-fMRI) study applied repetitive TMS trains at no, low, and high intensity over right intraparietal sulcus (IPS) and vertex to investigate top-down influences on visual and auditory cortices under 3 sensory contexts: visual, auditory, and no stimulation. IPS-TMS increased activations in auditory cortices irrespective of sensory context as a result of direct and nonspecific auditory TMS side effects. In contrast, IPS-TMS modulated activations in the visual cortex in a state-dependent fashion: it deactivated the visual cortex under no and auditory stimulation but amplified the BOLD response to visual stimulation. However, only the response amplification to visual stimulation was selective for IPS-TMS, while the deactivations observed for IPS- and Vertex-TMS resulted from crossmodal deactivations induced by auditory activity to TMS sounds. TMS to IPS may increase the responses in visual (or auditory) cortices to visual (or auditory) stimulation via a gain control mechanism or crossmodal interactions. Collectively, our results demonstrate that understanding TMS effects on (uni)sensory processing requires a multisensory perspective.

  15. The role of the medial temporal limbic system in processing emotions in voice and music.

    Science.gov (United States)

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.

  16. Temporal information processing and its relation to executive functions in elderly individuals

    Directory of Open Access Journals (Sweden)

    Kamila Nowak

    2016-10-01

    Full Text Available Normal aging triggers deterioration in cognitive functions. Evidence has shown that these age-related changes concern also executive functions (EF) as well as temporal information processing (TIP) in a millisecond range. A considerable amount of literature data has indicated that each of these two functions sets a frame for our mental activity and may be considered in terms of embodied cognition due to advanced age. The present study addresses the question whether in elderly subjects the efficiency of TIP is related to individual differences in EF. The study involved 53 normal healthy participants aged from 65 to 78. In these subjects TIP was assessed by sequencing abilities measured with the temporal-order threshold (TOT). It is defined as the minimum time gap separating two auditory stimuli presented in rapid succession which is necessary for a subject to report correctly their temporal order, thus the relation ‘before-after’. The EF were assessed with regard to the efficiency of executive planning measured with the Tower of London-Drexel University (TOLDX), which has become a well-known EF task. Using Spearman’s rank correlations we observed two main results. Firstly, the indices of the TOLDX indicated a coherent construct reflecting the effectiveness of executive planning in the elderly. Initiation time seemed dissociated from these coherent indices, which suggested a specific strategy of mental planning in the elderly based on on-line planning rather than on preplanning. Secondly, TOT was significantly correlated with the indices of TOLDX. Although some of these correlations were modified by subject’s age, the correlation between TOT and the main index of TOLDX (‘Total Move Score’) was rather age resistant. These results suggest that normal aging may be characterized by an overlapping of deteriorated TIP and deteriorated EF.

  17. Instantaneous and Frequency-Warped Signal Processing Techniques for Auditory Source Separation.

    Science.gov (United States)

    Wang, Avery Li-Chun

    This thesis summarizes several contributions to the areas of signal processing and auditory source separation. The philosophy of Frequency-Warped Signal Processing is introduced as a means for separating the AM and FM contributions to the bandwidth of a complex-valued, frequency-varying sinusoid p (n), transforming it into a signal with slowly-varying parameters. This transformation facilitates the removal of p (n) from an additive mixture while minimizing the amount of damage done to other signal components. The average winding rate of a complex-valued phasor is explored as an estimate of the instantaneous frequency. Theorems are provided showing the robustness of this measure. To implement frequency tracking, a Frequency-Locked Loop algorithm is introduced which uses the complex winding error to update its frequency estimate. The input signal is dynamically demodulated and filtered to extract the envelope. This envelope may then be remodulated to reconstruct the target partial, which may be subtracted from the original signal mixture to yield a new, quickly-adapting form of notch filtering. Enhancements to the basic tracker are made which, under certain conditions, attain the Cramer -Rao bound for the instantaneous frequency estimate. To improve tracking, the novel idea of Harmonic -Locked Loop tracking, using N harmonically constrained trackers, is introduced for tracking signals, such as voices and certain musical instruments. The estimated fundamental frequency is computed from a maximum-likelihood weighting of the N tracking estimates, making it highly robust. The result is that harmonic signals, such as voices, can be isolated from complex mixtures in the presence of other spectrally overlapping signals. Additionally, since phase information is preserved, the resynthesized harmonic signals may be removed from the original mixtures with relatively little damage to the residual signal. Finally, a new methodology is given for designing linear-phase FIR filters
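
    The demodulate-filter-remodulate idea summarized above can be sketched in a few lines of Python. The sketch below is a simplification under stated assumptions: the adaptive Frequency-Locked Loop of the thesis is replaced by a one-shot frequency estimate taken from the median winding rate of a band-limited analytic phasor, and the mixture, frequencies and filter settings are arbitrary toy values.

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 16000
    t = np.arange(fs) / fs                                  # 1 s of signal
    target = 0.8 * np.sin(2 * np.pi * 440 * t)              # partial to be removed
    mixture = target + 0.5 * np.sin(2 * np.pi * 1000 * t)   # toy two-partial mixture

    # frequency estimate from the winding rate of a band-limited analytic phasor
    b, a = butter(4, [400 / (fs / 2), 480 / (fs / 2)], btype="band")
    narrow = hilbert(filtfilt(b, a, mixture))
    winding = np.angle(narrow[1:] * np.conj(narrow[:-1]))   # phase advance per sample
    f_est = float(np.median(winding)) * fs / (2 * np.pi)

    # demodulate the analytic mixture at f_est, low-pass to get the complex envelope,
    # remodulate to reconstruct the tracked partial, and subtract it from the mixture
    analytic = hilbert(mixture)
    carrier = np.exp(-2j * np.pi * f_est * t)
    lb, la = butter(4, 50 / (fs / 2), btype="low")
    shifted = analytic * carrier
    envelope = filtfilt(lb, la, shifted.real) + 1j * filtfilt(lb, la, shifted.imag)
    reconstructed = np.real(envelope * np.conj(carrier))
    residual = mixture - reconstructed                      # "notch-filtered" mixture

    err = np.sum((reconstructed - target) ** 2) / np.sum(target ** 2)
    print(f"estimated frequency: {f_est:.1f} Hz, relative reconstruction error: {err:.3f}")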

  18. Fast-spiking GABA circuit dynamics in the auditory cortex predict recovery of sensory processing following peripheral nerve damage.

    Science.gov (United States)

    Resnik, Jennifer; Polley, Daniel B

    2017-03-21

    Cortical neurons remap their receptive fields and rescale sensitivity to spared peripheral inputs following sensory nerve damage. To address how these plasticity processes are coordinated over the course of functional recovery, we tracked receptive field reorganization, spontaneous activity, and response gain from individual principal neurons in the adult mouse auditory cortex over a 50-day period surrounding either moderate or massive auditory nerve damage. We related the day-by-day recovery of sound processing to dynamic changes in the strength of intracortical inhibition from parvalbumin-expressing (PV) inhibitory neurons. Whereas the status of brainstem-evoked potentials did not predict the recovery of sensory responses to surviving nerve fibers, homeostatic adjustments in PV-mediated inhibition during the first days following injury could predict the eventual recovery of cortical sound processing weeks later. These findings underscore the potential importance of self-regulated inhibitory dynamics for the restoration of sensory processing in excitatory neurons following peripheral nerve injuries.

  19. Differential bilateral involvement of the parietal gyrus during predicative metaphor processing: an auditory fMRI study.

    Science.gov (United States)

    Obert, Alexandre; Gierski, Fabien; Calmus, Arnaud; Portefaix, Christophe; Declercq, Christelle; Pierot, Laurent; Caillies, Stéphanie

    2014-10-01

    Despite the growing literature on figurative language processing, there is still debate as to which cognitive processes and neural bases are involved. Furthermore, most studies have focused on nominal metaphor processing without any context, and very few have used auditory presentation. We therefore investigated the neural bases of the comprehension of predicative metaphors presented in a brief context, in an auditory, ecological way. The comprehension of their literal counterparts served as a control condition. We also investigated the link between working memory and verbal skills and regional activation. Comparisons of metaphorical and literal conditions revealed bilateral activation of parietal areas including the left angular gyrus (lAG), the right inferior parietal gyrus (rIPG), and the right precuneus. Only verbal skills were associated with lAG (but not rIPG) activation. These results indicated that predicative metaphor comprehension shares common activations with other metaphors. Furthermore, individual verbal skills could have an impact on figurative language processing.

  20. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory.

    Science.gov (United States)

    Bigand, Emmanuel; Delbé, Charles; Poulin-Charronnat, Bénédicte; Leman, Marc; Tillmann, Barbara

    2014-01-01

    During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulation also raises methodological and theoretical challenges to study musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.
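
    To convey the flavour of the argument (sensory accumulation rather than syntax), the toy Python sketch below scores a target chord by its overlap with a leaky, exponentially decaying accumulation of the pitch classes heard in the preceding context. This is a deliberately crude stand-in, not the auditory memory model actually used in the simulations, and the decay constant and chords are arbitrary choices.

    import numpy as np

    PITCHES = "C C# D D# E F F# G G# A A# B".split()

    def chord(*notes):
        # unit-norm pitch-class vector for a chord
        v = np.zeros(12)
        for n in notes:
            v[PITCHES.index(n)] = 1.0
        return v / np.linalg.norm(v)

    def prime(context_chords, target, decay=0.5):
        # accumulate the context with exponential decay, then score the target by overlap
        memory = np.zeros(12)
        for c in context_chords:
            memory = decay * memory + c        # leaky accumulation in auditory memory
        return float(memory @ target / np.linalg.norm(memory))

    context = [chord("C", "E", "G"), chord("F", "A", "C"), chord("G", "B", "D")]
    related = chord("C", "E", "G")             # tonally related target
    unrelated = chord("F#", "A#", "C#")        # tonally distant target
    print(round(prime(context, related), 3), ">", round(prime(context, unrelated), 3))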

  1. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single click and click pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first and second order interneurons. The response to the second click was expressed relative to the single click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by a click a few milliseconds before. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.

  2. Dynamic temporal signal processing in the inferior colliculus of echolocating bats

    Science.gov (United States)

    Jen, Philip H.-S.; Wu, Chung Hsin; Wang, Xin

    2012-01-01

    In nature, communication sounds among animal species including humans are typical complex sounds that occur in sequence and vary with time in several parameters including amplitude, frequency, duration as well as separation, and order of individual sounds. Among these multiple parameters, sound duration is a simple but important one that contributes to the distinct spectral and temporal attributes of individual biological sounds. Likewise, the separation of individual sounds is an important temporal attribute that determines an animal's ability in distinguishing individual sounds. Whereas duration selectivity of auditory neurons underlies an animal's ability in recognition of sound duration, the recovery cycle of auditory neurons determines a neuron's ability in responding to closely spaced sound pulses and therefore, it underlies the animal's ability in analyzing the order of individual sounds. Since the multiple parameters of naturally occurring communication sounds vary with time, the analysis of a specific sound parameter by an animal would be inevitably affected by other co-varying sound parameters. This is particularly obvious in insectivorous bats, which rely on analysis of returning echoes for prey capture when they systematically vary the multiple pulse parameters throughout a target approach sequence. In this review article, we present our studies of dynamic variation of duration selectivity and recovery cycle of neurons in the central nucleus of the inferior colliculus of the frequency-modulated bats to highlight the dynamic temporal signal processing of central auditory neurons. These studies use single pulses and three biologically relevant pulse-echo (P-E) pairs with varied duration, gap, and amplitude difference similar to that occurring during search, approach, and terminal phases of hunting by bats. These studies show that most collicular neurons respond maximally to a best tuned sound duration (BD). The sound duration to which these neurons are

  3. Lateralization of Music Processing with Noises in the Auditory Cortex: An fNIRS Study

    Directory of Open Access Journals (Sweden)

    Hendrik eSantosa

    2014-12-01

    Full Text Available The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music+noise segments, and the entire music interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. The right-hemispheric lateralization in music processing was about 75% (instead of continuous music, only music segments were heard). If the stimuli were only noises, the lateralization was about 65%. However, if the music was mixed with noise, the right-hemispheric lateralization increased. In particular, if the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization; this is due to the subjects' effort to hear the music in the presence of noise. However, too much noise reduced the subjects' discerning efforts.
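
    The response feature named in the abstract (mean minus minimum of the hemodynamic response to a given stimulus) is easy to state in code. The sketch below also derives a simple per-subject lateralization percentage by counting stimuli for which the right-hemisphere feature exceeds the left; the HbO segments are random placeholders (with a larger amplitude on the right purely so the toy example yields a right-biased count), not real data or the authors' pipeline.

    import numpy as np

    def mean_minus_min(hbo_segment):
        # feature from the abstract: mean minus minimum of the response to one stimulus
        return float(np.mean(hbo_segment) - np.min(hbo_segment))

    def right_lateralized_fraction(left_segments, right_segments):
        # fraction of stimuli whose right-hemisphere feature exceeds the left one
        wins = [mean_minus_min(r) > mean_minus_min(l)
                for l, r in zip(left_segments, right_segments)]
        return sum(wins) / len(wins)

    rng = np.random.default_rng(1)
    left = [rng.standard_normal(100) for _ in range(10)]          # placeholder HbO segments
    right = [1.5 * rng.standard_normal(100) for _ in range(10)]   # larger-amplitude toy responses
    print(f"right-lateralized on {right_lateralized_fraction(left, right):.0%} of stimuli")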

  4. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  5. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  6. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder.

    Science.gov (United States)

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-26

    The prevalence of Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed issues related to the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs), with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD.
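
    As a generic illustration of how a P300 amplitude of the kind reported here can be quantified (this is not the study's pipeline; the sampling rate, analysis window, epoch counts and severity scores are invented placeholders), the Python sketch below averages deviant-tone epochs, baseline-corrects the ERP, takes the mean voltage in a 300-500 ms window, and then computes a Spearman correlation with a symptom-severity score across simulated subjects.

    import numpy as np
    from scipy.stats import spearmanr

    fs = 250                                         # assumed sampling rate (Hz)
    epoch_t = np.arange(int(1.0 * fs)) / fs - 0.2    # epoch from -200 ms to +796 ms

    def p300_amplitude(epochs, window=(0.30, 0.50)):
        # mean baseline-corrected voltage of the averaged ERP within `window` (seconds)
        erp = epochs.mean(axis=0)
        erp = erp - erp[epoch_t < 0].mean()          # pre-stimulus baseline correction
        mask = (epoch_t >= window[0]) & (epoch_t <= window[1])
        return float(erp[mask].mean())

    rng = np.random.default_rng(2)
    n_subjects = 12
    severity = rng.uniform(20, 80, n_subjects)       # hypothetical symptom-severity scores
    amplitudes = []
    for s in severity:
        # simulated P300-like bump whose size shrinks with severity, plus noise epochs
        evoked = 6.0 * (1 - s / 100) * np.exp(-((epoch_t - 0.35) ** 2) / 0.005)
        epochs = evoked + rng.standard_normal((40, epoch_t.size))
        amplitudes.append(p300_amplitude(epochs))
    rho, p = spearmanr(amplitudes, severity)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")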

  7. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder

    Science.gov (United States)

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-01

    The prevalence of Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed issues related to the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs), with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD. PMID:26812042

  8. Interhemispheric connectivity influences the degree of modulation of TMS-induced effects during auditory processing

    Directory of Open Access Journals (Sweden)

    Jamila eAndoh

    2011-07-01

    Full Text Available Repetitive TMS (rTMS has been shown to interfere with many components of language processing, including semantic, syntactic and phonologic. However, not much is known about its effects on primary auditory processing, especially its action on Heschl’s gyrus (HG. We aimed to investigate the behavioural and neural basis of rTMS during a melody processing task, while targeting the left HG, the right HG and the Vertex as a control site. Response Times (RT were normalized relative to the baseline-rTMS (Vertex and expressed as percentage change from baseline (%RT change. We also looked at sex differences in rTMS-induced response as well as in functional connectivity during melody processing using rTMS and functional Magnetic Resonance Imaging (fMRI.Functional MRI results showed an increase in the right HG compared with the left HG during the melody task, as well as sex differences in functional connectivity indicating a greater interhemispheric connectivity between left and right HG in females compared with males. TMS results showed that 10Hz-rTMS targeting the right HG induced differential effects according to sex, with a facilitation of performance in females and an impairment of performance in males. We also found a differential correlation between the %RT change after 10Hz-rTMS targeting the right HG and the interhemispheric functional connectivity between right and left HG, indicating that an increase in interhemispheric functional connectivity was associated with a facilitation of performance. This is the first study to report a differential rTMS-induced interference with melody processing depending on sex. In addition, we showed a relationship between the interference induced by rTMS on behavioral performance and the neural activity in the network connecting left and right HG, suggesting that the interhemispheric functional connectivity could determine the degree of modulation of behavioral performance.

  9. Interhemispheric Connectivity Influences the Degree of Modulation of TMS-Induced Effects during Auditory Processing.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2011-01-01

    Repetitive transcranial magnetic stimulation (rTMS) has been shown to interfere with many components of language processing, including semantic, syntactic, and phonologic. However, not much is known about its effects on nonlinguistic auditory processing, especially its action on Heschl's gyrus (HG). We aimed to investigate the behavioral and neural basis of rTMS during a melody processing task, while targeting the left HG, the right HG, and the Vertex as a control site. Response times (RT) were normalized relative to the baseline-rTMS (Vertex) and expressed as percentage change from baseline (%RT change). We also looked at sex differences in rTMS-induced response as well as in functional connectivity during melody processing using rTMS and functional magnetic resonance imaging (fMRI). fMRI results showed an increase in the right HG compared with the left HG during the melody task, as well as sex differences in functional connectivity indicating a greater interhemispheric connectivity between left and right HG in females compared with males. TMS results showed that 10 Hz-rTMS targeting the right HG induced differential effects according to sex, with a facilitation of performance in females and an impairment of performance in males. We also found a differential correlation between the %RT change after 10 Hz-rTMS targeting the right HG and the interhemispheric functional connectivity between right and left HG, indicating that an increase in interhemispheric functional connectivity was associated with a facilitation of performance. This is the first study to report a differential rTMS-induced interference with melody processing depending on sex. In addition, we showed a relationship between the interference induced by rTMS on behavioral performance and the neural activity in the network connecting left and right HG, suggesting that the interhemispheric functional connectivity could determine the degree of modulation of behavioral performance.
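    As a concrete reading of the normalization described above (an illustrative formulation; the authors' exact computation is not reproduced here), the percentage change in response time for a stimulation site relative to the Vertex baseline can be written as
    \[ \%RT\ \text{change} = \frac{RT_{\text{site}} - RT_{\text{Vertex}}}{RT_{\text{Vertex}}} \times 100, \]
    so that positive values indicate slowing (interference) and negative values indicate faster responses (facilitation) relative to control-site stimulation.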

  10. Acoustic processing of temporally modulated sounds in infants: evidence from a combined near-infrared spectroscopy and EEG study

    Directory of Open Access Journals (Sweden)

    Silke eTelkemeyer

    2011-04-01

    Full Text Available Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations indicating a sensitivity for temporal acoustic variations. Oscillatory responses reveal an effect of development, that is, 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in the research on language

  11. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan...

  12. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  13. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  14. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale-invariance across overall spectro-temporal structure of the sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale-invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale-invariance in perception by using an artificial sound, which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
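    As a rough formalization of the scale invariance discussed above (an illustrative sketch; the authors' precise definitions of Q and r are not reproduced here), a signal s(t) is statistically self-similar if rescaling time leaves its statistics unchanged up to an amplitude factor, which for second-order statistics corresponds to a power-law spectrum:
    \[ s(t) \stackrel{d}{=} a^{-H}\, s(at) \quad \text{for all } a > 0, \qquad S(f) \propto f^{-\beta}. \]
    Here H and β are generic self-similarity exponents used only for illustration; the Q and r parameters of the random chirp stimulus, which control the relative and absolute temporal structure, are distinct quantities.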

  15. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included measuring electrophysiological and behavioral auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests and higher hearing aid benefit in noisy situations (p-value < 0.05). No changes were noted for the control group (p-value > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds and greater benefits in reverberant and noisy environments.

  16. Avaliação do processamento auditivo na Neurofibromatose tipo 1 Auditory processing evaluation in Neurofibromatosis type 1

    Directory of Open Access Journals (Sweden)

    Pollyanna Barros Batista

    2010-12-01

    Full Text Available The aim of this study was to present the results obtained in the auditory processing evaluation of a patient with neurofibromatosis type 1. Although the patient presented normal peripheral hearing, auditory processing deficits were identified in several abilities. This finding, described for the first time in neurofibromatosis, might help to explain the cognitive and learning disabilities broadly described for this common genetic disorder.

  17. A corollary discharge mechanism modulates central auditory processing in singing crickets.

    Science.gov (United States)

    Poulet, J F A; Hedwig, B

    2003-03-01

    Crickets communicate using loud (100 dB SPL) sound signals that could adversely affect their own auditory system. To examine how they cope with this self-generated acoustic stimulation, intracellular recordings were made from auditory afferent neurons and an identified auditory interneuron-the Omega 1 neuron (ON1)-during pharmacologically elicited singing (stridulation). During sonorous stridulation, the auditory afferents and ON1 responded with bursts of spikes to the crickets' own song. When the crickets were stridulating silently, after one wing had been removed, only a few spikes were recorded in the afferents and ON1. Primary afferent depolarizations (PADs) occurred in the terminals of the auditory afferents, and inhibitory postsynaptic potentials (IPSPs) were apparent in ON1. The PADs and IPSPs were composed of many summed, small-amplitude potentials that occurred at a rate of about 230 Hz. The PADs and the IPSPs started during the closing wing movement and peaked in amplitude during the subsequent opening wing movement. As a consequence, during silent stridulation, ON1's response to acoustic stimuli was maximally inhibited during wing opening. Inhibition coincides with the time when ON1 would otherwise be most strongly excited by self-generated sounds in a sonorously stridulating cricket. The PADs and the IPSPs persisted in fictively stridulating crickets whose ventral nerve cord had been isolated from muscles and sense organs. This strongly suggests that the inhibition of the auditory pathway is the result of a corollary discharge from the stridulation motor network. The central inhibition was mimicked by hyperpolarizing current injection into ON1 while it was responding to a 100 dB SPL sound pulse. This suppressed its spiking response to the acoustic stimulus and maintained its response to subsequent, quieter stimuli. The corollary discharge therefore prevents auditory desensitization in stridulating crickets and allows the animals to respond to external

  18. The mitochondrial connection in auditory neuropathy.

    Science.gov (United States)

    Cacace, Anthony T; Pinheiro, Joaquim M B

    2011-01-01

    'Auditory neuropathy' (AN), the term used to codify a primary degeneration of the auditory nerve, can be linked directly or indirectly to mitochondrial dysfunction. These observations are based on the expression of AN in known mitochondrial-based neurological diseases (Friedreich's ataxia, Mohr-Tranebjærg syndrome), in conditions where defects in axonal transport, protein trafficking, and fusion processes perturb and/or disrupt mitochondrial dynamics (Charcot-Marie-Tooth disease, autosomal dominant optic atrophy), in a common neonatal condition known to be toxic to mitochondria (hyperbilirubinemia), and where respiratory chain deficiencies produce reductions in oxidative phosphorylation that adversely affect peripheral auditory mechanisms. This body of evidence is solidified by data derived from temporal bone and genetic studies, biochemical, molecular biologic, behavioral, electroacoustic, and electrophysiological investigations.

  19. Participação do cerebelo no processamento auditivo Participation of the cerebellum in auditory processing

    Directory of Open Access Journals (Sweden)

    Patrícia Maria Sens

    2007-04-01

    Full Text Available The cerebellum was traditionally viewed as an organ coordinating motor function; however, it is currently considered an important center for the integration of sensory information and for the coordination of various phases of the cognitive process. AIM: To systematize the information in the literature regarding the participation of the cerebellum in auditory perception. METHODS: Animal studies on the physiology and anatomy of the cerebellar auditory pathways were selected from the literature, as well as human studies on various functions of the cerebellum in auditory perception. The findings discussed indicate evidence that the cerebellum participates in the following cognitive functions related to hearing: verbal generation; auditory processing; auditory attention; auditory memory; abstract reasoning; timing; problem solving; sensory discrimination; sensory information; language processing; and linguistic operations. CONCLUSION: Information on the structures, functions and auditory pathways of the cerebellum remains incomplete.

  20. Modality specific neural correlates of auditory and somatic hallucinations

    Science.gov (United States)

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36 year old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality.

 PMID:11606687

  1. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system and music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  2. Temporal dynamics of biogeochemical processes at the Norman Landfill site

    Science.gov (United States)

    Arora, Bhavna; Mohanty, Binayak P.; McGuire, Jennifer T.; Cozzarelli, Isabelle M.

    2013-01-01

    The temporal variability observed in redox sensitive species in groundwater can be attributed to coupled hydrological, geochemical, and microbial processes. These controlling processes are typically nonstationary, and distributed across various time scales. Therefore, the purpose of this study is to investigate biogeochemical data sets from a municipal landfill site to identify the dominant modes of variation and determine the physical controls that become significant at different time scales. Data on hydraulic head, specific conductance, δ2H, chloride, sulfate, nitrate, and nonvolatile dissolved organic carbon were collected between 1998 and 2000 at three wells at the Norman Landfill site in Norman, OK. Wavelet analysis on this geochemical data set indicates that variations in concentrations of reactive and conservative solutes are strongly coupled to hydrologic variability (water table elevation and precipitation) at 8 month scales, and to individual eco-hydrogeologic framework (such as seasonality of vegetation, surface-groundwater dynamics) at 16 month scales. Apart from hydrologic variations, temporal variability in sulfate concentrations can be associated with different sources (FeS cycling, recharge events) and sinks (uptake by vegetation) depending on the well location and proximity to the leachate plume. Results suggest that nitrate concentrations show multiscale behavior across temporal scales for different well locations, and dominant variability in dissolved organic carbon for a closed municipal landfill can be larger than 2 years due to its decomposition and changing content. A conceptual framework that explains the variability in chemical concentrations at different time scales as a function of hydrologic processes, site-specific interactions, and/or coupled biogeochemical effects is also presented.
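    A minimal sketch of the kind of wavelet screening described above, written in Python with PyWavelets on a hypothetical monthly concentration series (the synthetic data, wavelet family, and scale range are assumptions for illustration, not the authors' actual analysis):

    import numpy as np
    import pywt  # PyWavelets

    # Hypothetical 36-month solute record (1998-2000) containing ~8- and ~16-month modes.
    rng = np.random.default_rng(0)
    months = np.arange(36)
    series = (np.sin(2 * np.pi * months / 8)
              + 0.5 * np.sin(2 * np.pi * months / 16)
              + 0.3 * rng.standard_normal(months.size))

    # Continuous wavelet transform with a Morlet wavelet; sampling period of 1 month.
    scales = np.arange(1, 20)
    coeffs, freqs = pywt.cwt(series, scales, 'morl', sampling_period=1.0)
    power = np.abs(coeffs) ** 2                      # wavelet power per scale and month
    dominant_period = 1.0 / freqs[power.mean(axis=1).argmax()]
    print(f"Dominant period ~ {dominant_period:.1f} months")

    Scale-averaged power peaks at the periods where the series carries most of its variance, which is how dominant modes such as the 8- and 16-month scales reported above would be identified in a real record.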

  3. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism

    Science.gov (United States)

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011
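    For reference, a common definition of Omega complexity used in such analyses (a sketch of the standard measure; the study's exact implementation may differ in detail) starts from the covariance matrix of the K sensor signals within each time-window, takes its eigenvalues λ_1, ..., λ_K, normalizes them, and forms an entropy-based quantity:
    \[ \lambda_i' = \frac{\lambda_i}{\sum_{j=1}^{K} \lambda_j}, \qquad \Omega = \exp\!\left(-\sum_{i=1}^{K} \lambda_i' \ln \lambda_i'\right). \]
    Ω approaches 1 when a single spatial component dominates (high global synchronization) and approaches K when activity is spatially uncorrelated, which is why higher Ω in the ASD group is interpreted as lower whole-brain synchronization.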

  4. Temporal information processing in cones: effects of light adaptation on temporal summation and modulation.

    Science.gov (United States)

    Daly, S J; Normann, R A

    1985-01-01

    We have studied the temporal information processing of turtle cones in steady states of light adaptation using intracellular recording techniques. We measured the linear range incremental sensitivity of cones as a function of the stimulus duration. Linear range incremental sensitivity is a function of the background intensity. It is also proportional to the duration of short duration stimuli but is independent of duration for long duration stimuli. The plot of log sensitivity versus log stimulus duration displays two straight line asymptotes; a slope of one for short durations and a slope of zero for long durations. These asymptotes intersect at a time, the critical duration, which decreases with increasing background intensity. Linear systems theory was used to predict these results in addition to the interdependence of critical duration, response kinetics, and sensitivity for any state of adaptation. We have also calculated cone sensitivity as a function of sinusoidal frequency for a variety of background intensities. Correlations between these results and psychophysical studies suggest that the limits on temporal summation established by the cones appear not to be substantially altered by the rest of the retina.
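    The two asymptotes described above can be summarized compactly (an illustrative restatement, not the authors' notation): for stimulus durations d shorter than the critical duration t_c, the response is governed by the time-integrated stimulus, so linear-range incremental sensitivity grows in proportion to d, whereas for longer durations it is constant,
    \[ S(d) \propto d \quad (d \le t_c), \qquad S(d) \approx S(t_c) \quad (d > t_c), \]
    giving slopes of one and zero on log-log axes, with the intersection at t_c shifting to shorter durations as the adapting background intensity increases.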

  5. Characterizing auditory processing and perception in individual listeners with sensorineural hearing loss

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    2011-01-01

    This study considered consequences of sensorineural hearing loss in ten listeners. The characterization of individual hearing loss was based on psychoacoustic data addressing audiometric pure-tone sensitivity, cochlear compression, frequency selectivity, temporal resolution, and intensity... The computational model described in [...–438 (2008)] was used as a framework. The parameters of the cochlear processing stage of the model were adjusted to account for behaviorally estimated individual basilar-membrane input-output functions and the audiogram, from which the amounts of inner hair-cell and outer hair-cell losses were estimated...
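    A simplified sketch of the loss-attribution step mentioned above (hedged; the exact procedure follows the cited model framework): the total audiometric loss at each frequency is assumed to be partitioned into an outer hair-cell component, bounded by the loss of cochlear gain and compression inferred from the basilar-membrane input-output function, and a residual inner hair-cell component,
    \[ HL_{\text{total}}(f) = HL_{\text{OHC}}(f) + HL_{\text{IHC}}(f), \]
    so that the behaviorally estimated input-output functions constrain HL_OHC and the remainder of the audiometric threshold shift is attributed to HL_IHC.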

  6. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). In addition to a review of the recent literature on the perception of temporal regularity in music, we will discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion on the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  7. The practices, challenges and recommendations of South African audiologists regarding managing children with auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Claire Fouché-Copley

    2016-02-01

    Full Text Available Audiologists managing children with auditory processing disorders (APD) encounter challenges that include conflicting definitions, several classification profiles, problems with differential diagnosis and a lack of standardised guidelines. The heterogeneity of the disorder and its concomitant childhood disorders make diagnosis difficult. Linguistic and cultural issues are additional challenges faced by South African audiologists. The study aimed to describe the practices, challenges and recommendations of South African audiologists managing children with APD. A quantitative, non-experimental descriptive survey was used to obtain data from 156 audiologists registered with the Health Professions Council of South Africa. Findings revealed that 67% screened for APD, 42% assessed, while 43% provided intervention. A variety of screening and assessment procedures were being administered, with no standard test battery identified. The range of intervention strategies being used is discussed. When the relationship between the number of years of experience and the audiologists’ level of preparedness to practice in the field of APD was compared, a statistically significant difference (p = 0.049) was seen, in that participants with more than 10 years of experience were more prepared to practice in this area. Participants who had qualified as both speech-language therapists and audiologists were significantly more prepared (p = 0.03) to practice than those qualified as audiologists only. Challenges experienced by the participants included the lack of linguistically and culturally appropriate screening and assessment tools and limited normative data. Recommendations included reviewing the undergraduate audiology training programmes, reinstituting the South African APD Taskforce, developing linguistically and culturally appropriate normative data, creating awareness among educators and involving them in the multidisciplinary team.

  8. Neuromagnetic mismatch field (MMF) dependence on the auditory temporal integration window and the existence of categorical boundaries: comparisons between dissyllabic words and their equivalent tones.

    Science.gov (United States)

    Inouchi, Mayako; Kubota, Mikio; Ohta, Katsuya; Matsushima, Eisuke; Ferrari, Paul; Scovel, Thomas

    2008-09-26

    Previous duration-related auditory mismatch response studies have tested vowels, words, and tones. Recently, the elicitation of strong neuromagnetic mismatch field (MMF) components in response to large (>32%) vowel-duration decrements was clearly observed within dissyllabic words. To date, however, the issues of whether this MMF duration-decrement effect also extends to duration increments, and to what degree these duration decrements and increments are attributed to their corresponding non-speech acoustic properties remain to be resolved. Accordingly, this magnetoencephalographic (MEG) study investigated whether prominent MMF components would be evoked by both duration decrements and increments for dissyllabic word stimuli as well as frequency-band matched tones in order to corroborate the relation between the MMF elicitation and the directions of duration changes in speech and non-speech. Further, the peak latency effects depending on stimulus types (words vs. tones) were examined. MEG responses were recorded with a whole-head 148-channel magnetometer, while subjects passively listened to the stimuli presented within an odd-ball paradigm for both shortened duration (180-->100%) and lengthened duration (100-->180%). Prominent MMF components were observed in the shortened and lengthened paradigms for the word stimuli, but only in the shortened paradigm for tones. The MMF peak latency results showed that the words led to earlier peak latencies than the tones. These findings suggest that duration lengthening as well as shortening in words produces a salient acoustic MMF response when the divergent point between the long and short durations falls within the temporal window of auditory integration post sound onset (<200 ms), and that the earlier latency of the dissyllabic word stimuli over tones is due to a prominent syllable structure in words which is used to generate temporal categorical boundaries.

  9. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002) while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
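    The Bayesian predictions referred to above follow the standard maximum-likelihood cue-combination model of Ernst and Banks (2002) and Alais and Burr (2004); a brief restatement under the usual independent-Gaussian assumptions:
    \[ \hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V, \quad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \quad w_V = 1 - w_A, \qquad \sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2). \]
    Optimal integration therefore predicts bimodal thresholds at or below the better unimodal threshold; the children's bimodal spatial thresholds exceeding this prediction is what marks their combination as non-optimal and vision-dominated.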

  10. Neural processing of auditory signals and modular neural control for sound tropism of walking machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Pasemann, Frank; Fischer, Joern;

    2005-01-01

    The specialized hairs and slit sensillae of spiders (Cupiennius salei) can sense the airflow and auditory signals in a low-frequency range. They provide the sensor information for reactive behavior, like e.g. capturing a prey. In analogy, in this paper a setup is described where two microphones a...

  11. Subjective Loudness and Reality of Auditory Verbal Hallucinations and Activation of the Inner Speech Processing Network

    NARCIS (Netherlands)

    Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Aleman, Andre

    2011-01-01

    Background: One of the most influential cognitive models of auditory verbal hallucinations (AVH) suggests that a failure to adequately monitor the production of one's own inner speech leads to verbal thought being misidentified as an alien voice. However, it is unclear whether this theory can explai

  12. Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts

    Science.gov (United States)

    2012-07-01


  13. Auditory Processing and Language Impairment in Children: Stimulus Considerations for Intervention.

    Science.gov (United States)

    Thal, Donna J.; Barone, Patricia

    1983-01-01

    The performance of language impaired children (four to eight years old) on auditory identification and sequencing tasks which employed different stimuli was studied in two experiments. Results indicated that some children performed significantly better when words rather than tones were used as stimuli.(Author/SEW)

  14. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures – temporally homogeneous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader “peripheral” filters or reduced compression...

  15. Great Expectations: Temporal Expectation Modulates Perceptual Processing Speed

    Science.gov (United States)

    Vangkilde, Signe; Coull, Jennifer T.; Bundesen, Claus

    2012-01-01

    In a crowded dynamic world, temporal expectations guide our attention in time. Prior investigations have consistently demonstrated that temporal expectations speed motor behavior. We explore effects of temporal expectation on "perceptual" speed in three nonspeeded, cued recognition paradigms. Different hazard rate functions for the cue-stimulus…

  16. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Directory of Open Access Journals (Sweden)

    Jos J Eggermont

    Full Text Available Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike-based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex.

  17. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Science.gov (United States)

    Eggermont, Jos J; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex.
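    As a small worked note on the variance figures above (assuming "variance explained" denotes the squared linear correlation between spike-pair and LFP-pair correlation coefficients): since R^2 = r^2, explaining 16% of the variance corresponds to |r| ≈ 0.40 and 9% corresponds to |r| ≈ 0.30, i.e. only a modest linear relationship between the two pair-correlation measures.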

  18. The Temporal Dynamics of Scene Processing: A Multifaceted EEG Investigation

    Science.gov (United States)

    Kravitz, Dwight J.

    2016-01-01

    Our remarkable ability to process complex visual scenes is supported by a network of scene-selective cortical regions. Despite growing knowledge about the scene representation in these regions, much less is known about the temporal dynamics with which these representations emerge. We conducted two experiments aimed at identifying and characterizing the earliest markers of scene-specific processing. In the first experiment, human participants viewed images of scenes, faces, and everyday objects while event-related potentials (ERPs) were recorded. We found that the first ERP component to evince a significantly stronger response to scenes than the other categories was the P2, peaking ∼220 ms after stimulus onset. To establish that the P2 component reflects scene-specific processing, in the second experiment, we recorded ERPs while the participants viewed diverse real-world scenes spanning the following three global scene properties: spatial expanse (open/closed), relative distance (near/far), and naturalness (man-made/natural). We found that P2 amplitude was sensitive to these scene properties at both the categorical level, distinguishing between open and closed natural scenes, as well as at the single-image level, reflecting both computationally derived scene statistics and behavioral ratings of naturalness and spatial expanse. Together, these results establish the P2 as an ERP marker for scene processing, and demonstrate that scene-specific global information is available in the neural response as early as 220 ms. PMID:27699208

  19. Behavioral Measures of Monaural Temporal Fine Structure Processing

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    ... characterizing hearing impairment. Estimating the acuity of monaural TFS processing in humans however remains a challenge. One suggested measure is based on the ability of listeners to detect a pitch shift between harmonic (H) and inharmonic (I) complex tones with unresolved components (e.g. Moore et al., JASA 125:3412-3422, 2009). However, spectral cues arising from detectable excitation pattern shifts or audible combination tones might supplement TFS cues in this H/I-discrimination task. The present study further assessed the importance of the role of TFS, in contrast to that of temporal envelope and spectral resolution, for the low pitch evoked by high-frequency complex tones. The aim was to estimate the efficiency of monaural TFS cues as a function of the stimulus center frequency Fc and its ratio N to the stimulus envelope repetition rate. A pitch-matching paradigm was used, such that changes...

  20. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing.

  1. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.

  2. Spectro-temporal processing of speech – An information-theoretic framework

    DEFF Research Database (Denmark)

    Christiansen, Thomas Ulrich; Dau, Torsten; Greenberg, Steven

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-date...

  3. Is It Necessary to Do Temporal Bone Computed Tomography of the Internal Auditory Canal in Tinnitus with Normal Hearing?

    Directory of Open Access Journals (Sweden)

    Tolgar Lutfi Kumral

    2013-01-01

    Full Text Available Objective. To investigate, using temporal bone computed tomography scans, whether compression of the vestibulocochlear nerve contributes to the etiology of tinnitus in normal-hearing ears. Methods. A prospective nonrandomized study enrolled 30 patients with bilateral tinnitus and 30 normal-hearing controls. Results. A total of 60 patients (ages ranged from 16 to 87) were included. The tinnitus group comprised 11 males and 19 females (mean age 49.50 ± 12.008) and the control group comprised 6 males and 24 females (mean age 39.47 ± 12.544). Regarding the right and left internal acoustic canal measurements (inlet, midcanal, and outlet canal lengths), there were no significant differences between the control and tinnitus groups (P > 0.005). There was no narrowing of the internal acoustic canal in the tinnitus group compared with the control group. High-frequency audiometric measurements of the right and left ears in the tinnitus group at 8000, 9000, 10000, 11200, 12500, 14000, 16000, and 18000 Hz were significantly lower than the control group thresholds (P < 0.05). There was high-frequency hearing loss in the tinnitus group. Conclusion. No anatomical differences were found to account for the tinnitus, suggesting physiological degeneration of the nerves rather than anatomical compression.

  4. Differences in synaptic and intrinsic properties result in topographic heterogeneity of temporal processing of neurons within the inferior colliculus.

    Science.gov (United States)

    Yassin, Lina; Pecka, Michael; Kajopoulos, Jasmin; Gleiss, Helge; Li, Lu; Leibold, Christian; Felmy, Felix

    2016-11-01

    The identification and characterization of organizing principles is essential for understanding the neural function of brain areas. The inferior colliculus (IC) represents a midbrain nexus involved in numerous aspects of auditory processing. Likewise, neurons throughout the IC are tuned to a diverse range of specific stimulus features. Yet beyond a topographic arrangement of the cochlea-inherited frequency tuning, the functional organization of the IC is not well understood. Particularly, a common principle that links the diverse tuning characteristics is unknown. Here we used in vitro patch clamp recordings combined with laser-uncaging, and in vivo single cell recordings to study the spatial and functional organization principles of the central IC. We identified a topographic bias of ascending synaptic input timing that is balanced between inhibition and excitation and co-varies with in vivo first-spike latency. This bias was paralleled post-synaptically by differences in biophysical membrane properties and firing patterns, with integrating neurons predominantly found in the dorso-medial part, and coincidence-detector neurons biased to the ventro-lateral IC. Importantly, these cellular and network features translated into distinct temporal processing capabilities irrespective of the neurons' characteristic frequency. Our data therefore imply that heterogeneity of synaptic inputs, intrinsic properties and temporal processing are functional principles that underlie the spatial organization of the central IC.

  5. Semantic Processing Impairment in Patients with Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    Amanda G. Jaimes-Bautista

    2015-01-01

    Full Text Available The impairment in episodic memory system is the best-known cognitive deficit in patients with temporal lobe epilepsy (TLE. Recent studies have shown evidence of semantic disorders, but they have been less studied than episodic memory. The semantic dysfunction in TLE has various cognitive manifestations, such as the presence of language disorders characterized by defects in naming, verbal fluency, or remote semantic information retrieval, which affects the ability of patients to interact with their surroundings. This paper is a review of recent research about the consequences of TLE on semantic processing, considering neuropsychological, electrophysiological, and neuroimaging findings, as well as the functional role of the hippocampus in semantic processing. The evidence from these studies shows disturbance of semantic memory in patients with TLE and supports the theory of declarative memory of the hippocampus. Functional neuroimaging studies show an inefficient compensatory functional reorganization of semantic networks and electrophysiological studies show a lack of N400 effect that could indicate that the deficit in semantic processing in patients with TLE could be due to a failure in the mechanisms of automatic access to lexicon.

  6. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  7. DISTINCT TEMPORALITIES IN THE BREAST CANCER DISEASE PROCESS

    Directory of Open Access Journals (Sweden)

    Janderléia Valéria Dolina

    2014-12-01

    Full Text Available This comprehensive-approach study aimed at understanding the reflections and contrasts between personal time and medical therapy protocol time in the life of a young woman with breast cancer. Addressed as a situational study and grounded in Beth’s life story about getting sick and dying of cancer at age 34, the study’s data collection process employed interviews, observation and medical record analysis. The construction of the analytic-synthetic box based on the chronology of Beth’s clinical progression, treatment phases and temporal perception of occurrences enabled us to point out a linear medical therapy protocol time identified by the diagnosis and treatment sequencing process. On the other hand, Beth’s experienced time was marked by simultaneous and non-linear events that generated suffering resulting from the disease. Such comprehension highlights the need for healthcare professionals to take into account the time experienced by the patient, thus providing an indispensable cancer therapeutic protocol with a personal character.

  8. The temporal characteristics of Ca2+ entry through L-type and T-type Ca2+ channels shape exocytosis efficiency in chick auditory hair cells during development.

    Science.gov (United States)

    Levic, Snezana; Dulon, Didier

    2012-12-01

    During development, synaptic exocytosis by cochlear hair cells is first initiated by patterned spontaneous Ca(2+) spikes and, at the onset of hearing, by sound-driven graded depolarizing potentials. The molecular reorganization occurring in the hair cell synaptic machinery during this developmental transition still remains elusive. We characterized the changes in biophysical properties of voltage-gated Ca(2+) currents and exocytosis in developing auditory hair cells of a precocial animal, the domestic chick. We found that immature chick hair cells (embryonic days 10-12) use two types of Ca(2+) currents to control exocytosis: low-voltage-activating, rapidly inactivating (mibefradil sensitive) T-type Ca(2+) currents and high-voltage-activating, noninactivating (nifedipine sensitive) L-type currents. Exocytosis evoked by T-type Ca(2+) current displayed a fast release component (RRP) but lacked the slow sustained release component (SRP), suggesting an inefficient recruitment of distant synaptic vesicles by this transient Ca(2+) current. With maturation, the participation of L-type Ca(2+) currents to exocytosis largely increased, inducing a highly Ca(2+) efficient recruitment of an RRP and an SRP component. Notably, L-type-driven exocytosis in immature hair cells displayed higher Ca(2+) efficiency when triggered by prerecorded native action potentials than by voltage steps, whereas similar efficiency for both protocols was found in mature hair cells. This difference likely reflects a tighter coupling between release sites and Ca(2+) channels in mature hair cells. Overall, our results suggest that the temporal characteristics of Ca(2+) entry through T-type and L-type Ca(2+) channels greatly influence synaptic release by hair cells during cochlear development.

  9. Effects of Physical Rehabilitation Integrated with Rhythmic Auditory Stimulation on Spatio-Temporal and Kinematic Parameters of Gait in Parkinson's Disease.

    Science.gov (United States)

    Pau, Massimiliano; Corona, Federica; Pili, Roberta; Casula, Carlo; Sors, Fabrizio; Agostini, Tiziano; Cossu, Giovanni; Guicciardi, Marco; Murgia, Mauro

    2016-01-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson's disease (PD). In this context, the use of rhythmic auditory stimulation (RAS) has been proven useful in improving several spatio-temporal parameters, but scarce information is available about its effect on gait patterns from a kinematic viewpoint. In this study, we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of supervised rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4 ± 11.1, Hoehn and Yahr 1-3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a 3-month follow-up, using concise measures (Gait Profile Score and Gait Variable Score, GPS and GVS, respectively), which describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence and in correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, driven mainly by significant decreases in the GVS associated with the hip flexion-extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new rehabilitative treatments or improving current ones.
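
    The GPS and GVS summary measures mentioned above are, in the usual formulation, root-mean-square deviations from a reference gait pattern. Below is a minimal sketch of that computation, assuming each variable is time-normalized to the same number of points per gait cycle; the function and variable names are illustrative, not the authors' code.

      import numpy as np

      def gait_variable_score(subject_curve, reference_mean):
          """RMS deviation of one kinematic variable from the reference mean,
          taken point-by-point over the time-normalized gait cycle."""
          subject_curve = np.asarray(subject_curve, dtype=float)
          reference_mean = np.asarray(reference_mean, dtype=float)
          return np.sqrt(np.mean((subject_curve - reference_mean) ** 2))

      def gait_profile_score(gvs_values):
          """RMS of the individual Gait Variable Scores."""
          gvs_values = np.asarray(gvs_values, dtype=float)
          return np.sqrt(np.mean(gvs_values ** 2))

      # Illustrative use: 101 points per gait cycle, one variable (hip flexion).
      cycle = np.linspace(0.0, 1.0, 101)
      hip_flex_subject = 30.0 * np.sin(2.0 * np.pi * cycle) + 5.0
      hip_flex_reference = 30.0 * np.sin(2.0 * np.pi * cycle)
      gvs_hip = gait_variable_score(hip_flex_subject, hip_flex_reference)
      gps = gait_profile_score([gvs_hip])  # extend the list with further GVS values
      print(round(gvs_hip, 2), round(gps, 2))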

  10. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    by comparing predictions from models based on the signal-to-noise envelope power ratio, SNRenv, and the modulation transfer function, MTF. The models were evaluated in conditions of noisy speech (1) subjected to reverberation, (2) distorted by phase jitter, or (3) processed by noise reduction via spectral...
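
    For orientation, the SNRenv metric named in the title is based on the envelope power of the noisy speech relative to that of the noise alone. The following is a simplified, single-band sketch of that idea, assuming a Hilbert envelope and a clean noise reference; the published model uses a full modulation filterbank and is not reproduced here.

      import numpy as np
      from scipy.signal import hilbert

      def envelope_power(x):
          """AC power of the Hilbert envelope, normalized by its squared mean (DC)."""
          env = np.abs(hilbert(x))
          ac = env - env.mean()
          return np.mean(ac ** 2) / (env.mean() ** 2 + 1e-12)

      def snr_env_db(noisy_speech, noise_alone):
          """Single-band envelope-power SNR: excess envelope power of the noisy
          speech over that of the noise alone, relative to the noise envelope power."""
          p_mix = envelope_power(noisy_speech)
          p_noise = envelope_power(noise_alone)
          return 10.0 * np.log10(max(p_mix - p_noise, 1e-12) / (p_noise + 1e-12))

      # Illustrative use with synthetic signals (16 kHz sample rate assumed).
      fs = 16000
      t = np.arange(fs) / fs
      speech_like = np.sin(2 * np.pi * 4 * t) * np.random.randn(fs)  # 4 Hz modulated noise
      noise = 0.5 * np.random.randn(fs)
      print(round(snr_env_db(speech_like + noise, noise), 1), "dB")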

  11. Elliptic Bessel processes and elliptic Dyson models realized as temporally inhomogeneous processes

    Science.gov (United States)

    Katori, Makoto

    2016-10-01

    The Bessel process with parameter D > 1 and the Dyson model of interacting Brownian motions with coupling constant β > 0 are extended to the processes in which the drift term and the interaction terms are given by the logarithmic derivatives of Jacobi's theta functions. They are called the elliptic Bessel process, eBES(D), and the elliptic Dyson model, eDYS(β), respectively. Both are realized on the circumference of a circle [0, 2πr) with radius r > 0 as temporally inhomogeneous processes defined in a finite time interval [0, t∗), t∗ < ∞. Transformations of them to Schrödinger-type equations with time-dependent potentials lead us to proving that eBES(D) and eDYS(β) can be constructed as the time-dependent Girsanov transformations of Brownian motions. In the special cases where D = 3 and β = 2, observables of the processes are defined and the processes are represented for them using the Brownian paths winding round a circle and pinned at time t∗. We show that eDYS(2) has the determinantal martingale representation for any observable. Then it is proved that eDYS(2) is determinantal for all observables for any finite initial configuration without multiple points. Determinantal processes are stochastic integrable systems in the sense that all spatio-temporal correlation functions are given by determinants controlled by a single continuous function called the spatio-temporal correlation kernel.
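
    For context, the non-elliptic counterparts of these processes satisfy the familiar stochastic differential equations below; as the abstract describes, the elliptic versions replace the 1/x interaction kernels with logarithmic derivatives of Jacobi theta functions and are defined on a circle over a finite time interval. The display is a reference sketch, not a reproduction of the paper's definitions.

      \mathrm{BES}^{(D)}:\qquad dX(t) = dB(t) + \frac{D-1}{2}\,\frac{dt}{X(t)}, \qquad D > 1,

      \mathrm{DYS}^{(\beta)}:\qquad dX_i(t) = dB_i(t) + \frac{\beta}{2}\sum_{j \neq i}\frac{dt}{X_i(t) - X_j(t)}, \qquad i = 1,\dots,N,\ \beta > 0,

    where B(t) and the B_i(t) are independent standard Brownian motions.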

  12. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
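
    The downstream stage described here (inner hair cells, synapses and auditory nerve fibers) can be caricatured by half-wave rectification, low-pass membrane filtering and stochastic spike generation. The sketch below is a generic stand-in under those assumptions; the cutoff, rate scaling and Poisson-style spiking are illustrative choices, not the model used in the paper.

      import numpy as np

      def ihc_an_response(sound, fs, cutoff_hz=1000.0, max_rate=250.0, spont_rate=50.0, seed=0):
          """Toy inner-hair-cell / auditory-nerve stage:
          half-wave rectification -> first-order low-pass -> stochastic spikes."""
          rect = np.maximum(sound, 0.0)                        # transduction nonlinearity
          alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)  # one-pole low-pass coefficient
          potential = np.zeros_like(rect)
          for n in range(1, len(rect)):
              potential[n] = potential[n - 1] + alpha * (rect[n] - potential[n - 1])
          rate = spont_rate + max_rate * potential / (potential.max() + 1e-12)
          rng = np.random.default_rng(seed)
          spikes = rng.random(len(rate)) < rate / fs            # Bernoulli approx. of Poisson
          return rate, spikes

      # Illustrative use: 2 kHz tone burst at a 40 kHz sample rate.
      fs = 40000
      t = np.arange(int(0.05 * fs)) / fs
      tone = np.sin(2 * np.pi * 2000 * t)
      rate, spikes = ihc_an_response(tone, fs)
      print(spikes.sum(), "spikes in 50 ms")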

  13. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

    Full Text Available Temporal contiguity between an action and the corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.

  14. Detection, information fusion, and temporal processing for intelligence in recognition

    Energy Technology Data Exchange (ETDEWEB)

    Casasent, D. [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1996-12-31

    The use of intelligence in vision recognition draws on many different techniques or tools. This presentation discusses several of these techniques for recognition. The recognition process is generally separated into several steps or stages when implemented in hardware, e.g. detection, segmentation and enhancement, and recognition. Several new distortion-invariant filters, biologically-inspired Gabor wavelet filter techniques, and morphological operations that have been found very useful for detection and clutter rejection are discussed. These are all shift-invariant operations that allow multiple object regions of interest in a scene to be located in parallel. We also discuss new algorithm fusion concepts by which the results from different detection algorithms are combined to reduce detection false alarms; these fusion methods utilize hierarchical processing and fuzzy logic concepts. We have found this to be most necessary, since no single detection algorithm is best for all cases. For the final recognition stage, we describe a new method of representing all distorted versions of different classes of objects and determining the object class and pose that most closely match those of a given input. Besides being efficient in terms of storage and the on-line computations required, it overcomes many of the problems that other classifiers have in terms of the required training set size, poor generalization with many hidden-layer neurons, etc. It is also attractive in its ability to reject input regions as clutter (non-objects) and to learn new object descriptions. We also discuss its use in processing a temporal sequence of input images of the contents of each local region of interest, and note how this leads to robust results in which estimation errors in individual frames can be overcome. This seems very practical, since in many scenarios a decision need not be made after only one frame of data; subsequent frames enter immediately in sequence.
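
    As an illustration of the biologically-inspired Gabor wavelet filtering mentioned for the detection stage, the following sketch builds a 2-D Gabor kernel and applies it by convolution; all parameter values are arbitrary and not taken from the presentation.

      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=4.0, gamma=0.5):
          """Real part of a 2-D Gabor filter: a plane wave windowed by a Gaussian."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2.0 * sigma ** 2))
          carrier = np.cos(2.0 * np.pi * xr / wavelength)
          return envelope * carrier

      # Illustrative use: respond to oriented structure in a random test image.
      image = np.random.rand(64, 64)
      responses = [convolve2d(image, gabor_kernel(theta=th), mode='same')
                   for th in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
      energy = np.max(np.abs(responses), axis=0)   # orientation-pooled detection map
      print(energy.shape)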

  15. Sex-related differences in auditory processing in adolescents with fetal alcohol spectrum disorder: A magnetoencephalographic study

    Directory of Open Access Journals (Sweden)

    Claudia D. Tesche

    2015-01-01

    Full Text Available Children exposed to substantial amounts of alcohol in utero display a broad range of morphological and behavioral outcomes, which are collectively referred to as fetal alcohol spectrum disorders (FASDs). Common to all children on the spectrum are cognitive and behavioral problems that reflect central nervous system dysfunction. Little is known, however, about the potential effects of variables such as sex on alcohol-induced brain damage. The goal of the current research was to utilize magnetoencephalography (MEG) to examine the effect of sex on brain dynamics in adolescents and young adults with FASD during the performance of an auditory oddball task. The stimuli were short trains of 1 kHz “standard” tone bursts (80%) randomly interleaved with 1.5 kHz “target” tone bursts (10%) and “novel” digital sounds (10%). Participants made motor responses to the target tones. Results are reported for 44 individuals (18 males and 26 females) ages 12 through 22 years. Nine males and 13 females had a diagnosis of FASD and the remainder were typically-developing age- and sex-matched controls. The main finding was widespread sex-specific differential activation of the frontal, medial and temporal cortex in adolescents with FASD compared to typically developing controls. Significant differences in evoked-response and time–frequency measures of brain dynamics were observed for all stimulus types in the auditory cortex, inferior frontal sulcus and hippocampus. These results underscore the importance of considering the influence of sex when analyzing neurophysiological data in children with FASD.

  16. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  17. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  18. Temporal dynamics of reward processing revealed by magnetoencephalography.

    Science.gov (United States)

    Doñamayor, Nuria; Marco-Pallarés, Josep; Heldmann, Marcus; Schoenfeld, M Ariel; Münte, Thomas F

    2011-12-01

    Monetary gains and losses in gambling situations are associated with a distinct electroencephalographic signature: in the event-related potentials (ERPs), a mediofrontal feedback-related negativity (FRN) is seen for losses, whereas oscillatory activity shows a burst in the θ-range for losses and in the β-range for gains. We used whole-head magnetoencephalography to pinpoint the magnetic counterparts of these effects in young healthy adults and explore their evolution over time. On each trial, participants bet on one of two visually presented numbers (25 or 5) by button-press. Both numbers changed color: if the chosen number turned green (red), it indicated a gain (loss) of the corresponding sum in Euro cents. For losses, we found the magnetic correlate of the FRN extending between 230 and 465 ms. Source localization with low-resolution electromagnetic tomography indicated a first generator in the posterior cingulate cortex with subsequent activity in the anterior cingulate cortex. Importantly, this effect was sensitive to the magnitude of the monetary loss (25 cents > 5 cents). Later activation was also found in the right insula. Time-frequency analysis revealed a number of oscillatory components in the theta, alpha, and high-beta/low-gamma bands associated with gains, and in the high-beta band associated with the magnitude of the loss. Altogether, these effects provide a more fine-grained picture of the temporal dynamics of the processing of monetary rewards and losses in the brain.

  19. Cerebral information processing in personality disorders: I. Intensity dependence of auditory evoked potentials.

    Science.gov (United States)

    Wang, Wei; Wang, Yehan; Fu, Xianming; Liu, Jianhui; He, Chengsen; Dong, Yi; Livesley, W John; Jang, Kerry L

    2006-02-28

    Patients with personality disorders such as the histrionic type exaggerate their responses when receiving external social or environmental stimuli. We speculated that they might also show an augmenting pattern of the auditory evoked potential N1-P2 component in response to stimuli of increasing intensity, a response pattern that is thought to be inversely correlated with cerebral serotonin (5-HT) activity. To test this hypothesis, we collected auditory evoked potentials in 191 patients with personality disorders (19 patients with the paranoid type, 12 schizoid, 14 schizotypal, 18 antisocial, 15 borderline, 13 histrionic, 17 narcissistic, 25 avoidant, 30 dependent and 28 obsessive-compulsive) and 26 healthy volunteers. Their personality traits were measured using the Dimensional Assessment of Personality Pathology-Basic Questionnaire (DAPP-BQ). The histrionic group scored higher than the healthy subjects and the other patient groups on the basic traits Affective Instability, Stimulus Seeking, Rejection and Narcissism, and on the higher-order traits Emotional Dysregulation and Dissocial, whereas the schizoid group scored lower on most of the DAPP-BQ basic and higher-order traits. In addition, the histrionic group showed steeper amplitude/stimulus intensity function (ASF) slopes at three midline scalp electrodes than the healthy controls or the other patient groups. The ASF slopes were not correlated with any DAPP-BQ traits in the total sample of 217 subjects. However, the DAPP-BQ basic trait Rejection was positively correlated with the ASF slopes at all three electrode sites in the histrionic group. The increased intensity dependence of the auditory N1-P2 component might indicate that cerebral 5-HT neuronal activity is, on average, weak in histrionic patients.

  20. Auditory and visual scene analysis: an overview

    Science.gov (United States)

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  1. Hemispheric Asymmetries for Temporal Information Processing: Transient Detection versus Sustained Monitoring

    Science.gov (United States)

    Okubo, Matia; Nicholls, Michael E. R.

    2008-01-01

    This study investigated functional differences in the processing of visual temporal information between the left and right hemispheres (LH and RH). Participants indicated whether or not a checkerboard pattern contained a temporal gap lasting between 10 and 40 ms. When the stimulus contained a temporal signal (i.e. a gap), responses were more…

  2. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  3. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  4. Lead exposure and the central auditory processing abilities and cognitive development of urban children: the Cincinnati Lead Study cohort at age 5 years

    Energy Technology Data Exchange (ETDEWEB)

    Dietrich, K.N.; Succop, P.A.; Berger, O.G.; Keith, R.W. (University of Cincinnati College of Medicine, Department of Environmental Health, OH (United States))

    1992-01-01

    This analysis examined the relationship between lead exposure as registered in whole blood (PbB) and the central auditory processing abilities and cognitive developmental status of the Cincinnati cohort (N = 259) at age 5 years. Although the effects were small, higher prenatal, neonatal, and postnatal PbB levels were associated with poorer central auditory processing abilities on the Filtered Word Subtest of the SCAN (a screening test for auditory processing disorders). Higher postnatal PbB levels were associated with poorer performance on all cognitive developmental subscales of the Kaufman Assessment Battery for Children (K-ABC). However, following adjustment for measures of the home environment and maternal intelligence, few statistically or near statistically significant associations remained. Our findings are discussed in the context of the related issues of confounding and the detection of weak associations in high risk populations.

  5. Processing of species-specific auditory patterns in the cricket brain by ascending, local, and descending neurons during standing and walking.

    Science.gov (United States)

    Zorović, M; Hedwig, B

    2011-05-01

    The recognition of the male calling song is essential for phonotaxis in female crickets. We investigated the responses toward different models of song patterns by ascending, local, and descending neurons in the brain of standing and walking crickets. We describe results for two ascending, three local, and two descending interneurons. Characteristic dendritic and axonal arborizations of the local and descending neurons indicate a flow of auditory information from the ascending interneurons toward the lateral accessory lobes and point toward the relevance of this brain region for cricket phonotaxis. Two aspects of auditory processing were studied: the tuning of interneuron activity to pulse repetition rate and the precision of pattern copying. Whereas ascending neurons exhibited weak, low-pass properties, local neurons showed both low- and band-pass properties, and descending neurons represented clear band-pass filters. Accurate copying of single pulses was found at all three levels of the auditory pathway. Animals were walking on a trackball, which allowed an assessment of the effect that walking has on auditory processing. During walking, all neurons were additionally activated, and in most neurons, the spike rate was correlated to walking velocity. The number of spikes elicited by a chirp increased with walking only in ascending neurons, whereas the peak instantaneous spike rate of the auditory responses increased on all levels of the processing pathway. Extra spiking activity resulted in a somewhat degraded copying of the pulse pattern in most neurons.

  6. Audience preferences are predicted by temporal reliability of neural processing.

    Science.gov (United States)

    Dmochowski, Jacek P; Bezdek, Matthew A; Abelson, Brian P; Johnson, John S; Schumacher, Eric H; Parra, Lucas C

    2014-07-29

    Naturalistic stimuli evoke highly reliable brain activity across viewers. Here we record neural activity from a group of naive individuals while viewing popular, previously-broadcast television content for which the broad audience response is characterized by social media activity and audience ratings. We find that the level of inter-subject correlation in the evoked encephalographic responses predicts the expressions of interest and preference among thousands. Surprisingly, ratings of the larger audience are predicted with greater accuracy than those of the individuals from whom the neural data is obtained. An additional functional magnetic resonance imaging study employing a separate sample of subjects shows that the level of neural reliability evoked by these stimuli covaries with the amount of blood-oxygenation-level-dependent (BOLD) activation in higher-order visual and auditory regions. Our findings suggest that stimuli which we judge favourably may be those to which our brains respond in a stereotypical manner shared by our peers.
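
    The inter-subject correlation underlying this result can be illustrated with a simple leave-one-out correlation between each viewer's response and the mean of the remaining viewers. This is only a single-channel shorthand, not the component-based method used in the study.

      import numpy as np

      def intersubject_correlation(responses):
          """Leave-one-out ISC: correlate each subject's response with the mean
          of all other subjects, then average. `responses` is (n_subjects, n_samples)."""
          responses = np.asarray(responses, dtype=float)
          n = responses.shape[0]
          corrs = []
          for i in range(n):
              others = np.delete(responses, i, axis=0).mean(axis=0)
              corrs.append(np.corrcoef(responses[i], others)[0, 1])
          return float(np.mean(corrs))

      # Illustrative use: a shared stimulus-driven component plus subject-specific noise.
      rng = np.random.default_rng(0)
      shared = rng.standard_normal(1000)
      subjects = shared + 2.0 * rng.standard_normal((10, 1000))
      print(round(intersubject_correlation(subjects), 3))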

  7. Treinamento auditivo para transtorno do processamento auditivo: uma proposta de intervenção terapêutica Auditory training for auditory processing disorder: a proposal for therapeutic intervention

    Directory of Open Access Journals (Sweden)

    Alessandra Giannella Samelli

    2010-04-01

    Full Text Available PURPOSE: to verify the efficacy of an informal auditory training program targeting (central) auditory processing disorders in a group of patients with this condition, by comparing pre- and post-test results. METHODS: ten subjects of both sexes, aged 7 to 20 years, took part in this study. All underwent complete audiological and (central) auditory processing evaluations (Speech-in-Noise, Staggered Spondaic Word (SSW), Dichotic Digits, and Frequency Pattern tests). After 10 individual auditory training sessions, in which the impaired auditory abilities were directly trained, the auditory processing evaluation was repeated. RESULTS: the mean percentages of correct responses before and after auditory training showed statistically significant differences in all tests administered. CONCLUSION: the informal auditory training program proved effective in a group of patients with auditory processing disorder, as it produced a statistically significant difference between pre- and post-test performance on the auditory processing evaluation, indicating improvement of the impaired auditory abilities.

  8. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  9. Mismatch negativity (MMN) and sensory auditory processing in children aged 9-12 years presenting with putative antecedents of schizophrenia.

    Science.gov (United States)

    Bruggemann, Jason M; Stockill, Helen V; Lenroot, Rhoshel K; Laurens, Kristin R

    2013-09-01

    Identification of markers of abnormal brain function in children at-risk of schizophrenia may inform early intervention and prevention programs. Individuals with schizophrenia are characterised by attenuation of MMN amplitude, which indexes automatic auditory sensory processing. The current aim was to examine whether children who may be at increased risk of schizophrenia due to their presenting multiple putative antecedents of schizophrenia (ASz) are similarly characterised by MMN amplitude reductions, relative to typically developing (TD) children. EEG was recorded from 22 ASz and 24 TD children aged 9 to 12 years (matched on age, sex, and IQ) during a passive auditory oddball task (15% duration deviant). ASz children were those presenting: (1) speech and/or motor development lags/problems; (2) social, emotional, or behavioural problems in the clinical range; and (3) psychotic-like experiences. TD children presented no antecedents, and had no family history of a schizophrenia spectrum disorder. MMN amplitude, but not latency, was significantly greater at frontal sites in the ASz group than in the TD group. Although the MMN exhibited by the children at risk of schizophrenia was unlike that of their typically developing peers, it also differed from the reduced MMN amplitude observed in adults with schizophrenia. This may reflect developmental and disease effects in a pre-prodromal phase of psychosis onset. Longitudinal follow-up is necessary to establish the developmental trajectory of MMN in at-risk children.

  10. Spatio-temporal statistical models with applications to atmospheric processes

    Energy Technology Data Exchange (ETDEWEB)

    Wikle, C.K.

    1996-12-31

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues raised have not previously been considered in the context of these methods, and several open questions are posed. The first paper attempts to determine a means of characterizing the spatial variation of the semiannual oscillation (SAO) in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for the analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  11. Neural Processing of Auditory Signals and Modular Neural Control for Sound Tropism of Walking Machines

    Directory of Open Access Journals (Sweden)

    Hubert Roth

    2008-11-01

    Full Text Available The specialized hairs and slit sensillae of spiders (Cupiennius salei) can sense airflow and auditory signals in a low-frequency range. They provide the sensory information for reactive behavior, such as capturing prey. In analogy, this paper describes a setup in which two microphones and a neural preprocessing system, together with a modular neural controller, are used to generate a sound tropism in a four-legged walking machine. The neural preprocessing network acts as a low-pass filter and is followed by a network that discriminates between signals coming from the left or the right. The parameters of these networks are optimized by an evolutionary algorithm. In addition, a simple modular neural controller then generates the desired walking patterns such that the machine walks straight, turns towards a switched-on sound source, and then stops near it.
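
    A discrete-time caricature of such a preprocessing stage is a leaky-integrator (low-pass) unit per microphone followed by a left/right comparison that could drive turning. The time constant and threshold below are invented for illustration; in the paper these parameters were optimized by an evolutionary algorithm.

      import numpy as np

      def leaky_integrate(signal, alpha=0.05):
          """First-order low-pass ('leaky integrator') of a rectified input signal."""
          out = np.zeros_like(signal, dtype=float)
          for n in range(1, len(signal)):
              out[n] = (1.0 - alpha) * out[n - 1] + alpha * abs(signal[n])
          return out

      def turning_command(left_mic, right_mic, threshold=0.01):
          """Return 'left', 'right' or 'straight' from the low-passed channel energies."""
          diff = leaky_integrate(left_mic)[-1] - leaky_integrate(right_mic)[-1]
          if diff > threshold:
              return 'left'
          if diff < -threshold:
              return 'right'
          return 'straight'

      # Illustrative use: a 200 Hz source louder on the right microphone.
      t = np.arange(0, 1.0, 1.0 / 4000.0)
      source = np.sin(2 * np.pi * 200 * t)
      print(turning_command(0.4 * source, 1.0 * source))  # -> 'right'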

  12. Proportional spike-timing precision and firing reliability underlie efficient temporal processing of periodicity and envelope shape cues.

    Science.gov (United States)

    Zheng, Y; Escabí, M A

    2013-08-01

    Temporal sound cues are essential for sound recognition, pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extends from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with a slow-varying envelope. Information-theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability was strongly affected by the sound modulation frequency. Both the information efficiency and the total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain where proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues.
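
    Two of the quantities discussed here can be estimated directly from a spike raster: trial-to-trial firing reliability and spike-timing precision. The sketch below uses simple stand-in definitions (inverse coefficient of variation of spike counts, and the jitter of first-spike times), not the information-theoretic measures of the study.

      import numpy as np

      def count_reliability(spike_counts):
          """Trial-to-trial reliability of spike counts as mean/std (inverse CV)."""
          spike_counts = np.asarray(spike_counts, dtype=float)
          return spike_counts.mean() / (spike_counts.std() + 1e-12)

      def timing_jitter(first_spike_times_ms):
          """Spike-timing precision as the standard deviation of first-spike times."""
          return float(np.std(np.asarray(first_spike_times_ms, dtype=float)))

      # Illustrative use: 20 trials of a transient response.
      rng = np.random.default_rng(1)
      counts = rng.poisson(8, size=20)                      # spikes per trial
      first_spikes = 12.0 + 0.5 * rng.standard_normal(20)   # ms, ~0.5 ms jitter
      print(round(count_reliability(counts), 2), round(timing_jitter(first_spikes), 2))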

  13. Relação entre potenciais evocados auditivos de média latência e distúrbio de processamento auditivo: estudo de casos Relationship between middle latency auditory evoked potentials and auditory processing disorder: a case study

    Directory of Open Access Journals (Sweden)

    Ana Carla Leite Romero

    2013-01-01

    This study aimed to analyze the middle latency auditory evoked response in two patients with auditory processing disorder and to relate objective and behavioral measures. The case study was conducted in two patients (P1: 12 years old, female; P2: 17 years old, male), both free of sensory abnormalities and of neurological and neuropsychiatric disorders. Both underwent anamnesis, inspection of the external ear canal, a hearing test and evaluation of the middle latency auditory evoked response. There was a significant association between the behavioral and objective results. In the interview, there were complaints about difficulty listening in a noisy environment, sound localization, inattention, and phonological changes in writing and speaking, which were confirmed by the auditory processing evaluation and the middle latency auditory evoked response. In both cases, the behavioral assessment of auditory processing showed changes in the auditory decoding process on the right; the middle latency auditory evoked potential showed a deficient response in the right contralateral pathway, confirming the patients' difficulty in assigning meaning to acoustic information under competitive sound conditions on the right. These cases show an association between the two sets of results, but further studies with larger samples are needed to confirm the data.

  15. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. Processing of natural temporal stimuli by macaque retinal ganglion cells

    NARCIS (Netherlands)

    Hateren, J.H. van; Rüttiger, L.; Lee, B.B.

    2002-01-01

    This study quantifies the performance of primate retinal ganglion cells in response to natural stimuli. Stimuli were confined to the temporal and chromatic domains and were derived from two contrasting environments, one typically northern European and the other a flower show. The performance of the

  17. Survey of Bayesian Models for Modelling of Stochastic Temporal Processes

    Energy Technology Data Exchange (ETDEWEB)

    Ng, B

    2006-10-12

    This survey gives an overview of popular generative models used in the modeling of stochastic temporal systems. In particular, this survey is organized into two parts. The first part discusses the discrete-time representations of dynamic Bayesian networks and dynamic relational probabilistic models, while the second part discusses the continuous-time representation of continuous-time Bayesian networks.

  18. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  19. Auditory sensory processing deficits in sensory gating and mismatch negativity-like responses in the social isolation rat model of schizophrenia

    DEFF Research Database (Denmark)

    Witten, Louise; Oranje, Bob; Mørk, Arne;

    2014-01-01

    Patients with schizophrenia exhibit disturbances in information processing. These disturbances can be investigated with different paradigms of auditory event related potentials (ERP), such as sensory gating in a double click paradigm (P50 suppression) and the mismatch negativity (MMN) component...... supports previous findings in SI rats and the reduced MMN-like response is similar to the deficits of MMN seen in patients with schizophrenia. Since reduced auditory MMN amplitude is believed to be more selectively associated with schizophrenia than other measures of sensory gating deficits, the current...... study supports the face validity of the SI reared rat model for schizophrenia....

  20. Linear Stimulus-Invariant Processing and Spectrotemporal Reverse Correlation in Primary Auditory Cortex

    Science.gov (United States)

    2003-01-01

  1. Speech Perception Deficits in Poor Readers: Auditory Processing or Phonological Coding?

    Science.gov (United States)

    Mody, Maria; And Others

    1997-01-01

    Forty second-graders, 20 good and 20 poor readers, completed a /ba/-/da/ temporal order judgment (TOJ) task. The groups did not differ in TOJ when /ba/ and /da/ were paired with more easily discriminated syllables. Poor readers' difficulties with /ba/-/da/ reflected perceptual confusion between phonetically similar syllables rather than difficulty…

  2. The Effects of Aircraft Noise on the Auditory Language Processing Abilities of English First Language Primary School Learners in Durban, South Africa

    Science.gov (United States)

    Hollander, Cara; de Andrade, Victor Manuel

    2014-01-01

    Schools located near to airports are exposed to high levels of noise which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…

  3. Understanding and Identifying the Child at Risk for Auditory Processing Disorders: A Case Method Approach in Examining the Interdisciplinary Role of the School Nurse

    Science.gov (United States)

    Neville, Kathleen; Foley, Marie; Gertner, Alan

    2011-01-01

    Despite receiving increased professional and public awareness since the initial American Speech Language Hearing Association (ASHA) statement defining Auditory Processing Disorders (APDs) in 1993 and the subsequent ASHA statement (2005), many misconceptions remain regarding APDs in school-age children among health and academic professionals. While…

  4. Test Review: R. W. Keith "SCAN-3 for Adolescents and Adults--Tests for Auditory Processing Disorders". San Antonio, TX: Pearson, 2009

    Science.gov (United States)

    Lovett, Benjamin J.; Johnson, Theodore L.

    2010-01-01

    The SCAN-3 is a battery of tasks used for the screening and diagnosis of auditory processing disorder. It is available in two versions, one for children (the SCAN-3: C) and one for adolescents and adults (the SCAN-3: A); the latter version of the SCAN-3 is reviewed in this article, although it is very similar to the child version. The primary…

  5. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  6. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  7. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  8. Experimental analysis of the auditory detection process on avian point counts

    Science.gov (United States)

    Simons, T.R.; Alldredge, M.W.; Pollock, K.H.; Wettroth, J.M.

    2007-01-01

    We have developed a system for simulating the conditions of avian surveys in which birds are identified by sound. The system uses a laptop computer to control a set of amplified MP3 players placed at known locations around a survey point. The system can realistically simulate a known population of songbirds under a range of factors that affect detection probabilities. The goals of our research are to describe the sources and range of variability affecting point-count estimates and to find applications of sampling theory and methodologies that produce practical improvements in the quality of bird-census data. Initial experiments in an open field showed that, on average, observers tend to undercount birds on unlimited-radius counts, though the proportion of birds counted by individual observers ranged from 81% to 132% of the actual total. In contrast to the unlimited-radius counts, when data were truncated at a 50-m radius around the point, observers overestimated the total population by 17% to 122%. Results also illustrate how detection distances decline and identification errors increase with increasing levels of ambient noise. Overall, the proportion of birds heard by observers decreased by 28 ± 4.7% under breezy conditions, 41 ± 5.2% with the presence of additional background birds, and 42 ± 3.4% with the addition of 10 dB of white noise. These findings illustrate some of the inherent difficulties in interpreting avian abundance estimates based on auditory detections, and why estimates that do not account for variations in detection probability will not withstand critical scrutiny. © The American Ornithologists' Union, 2007.

  9. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  10. A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications

    Science.gov (United States)

    2014-09-20

    Dorsa Sadigh; Eric Kim; Samuel... We propose to synthesize a control policy for a Markov decision process (MDP) such that the resulting traces of the MDP satisfy a linear temporal logic (LTL) specification.
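
    As an illustration of the computation at the core of such synthesis, the sketch below runs value iteration for the maximal probability of reaching an accepting state in a small MDP. This is a simplified stand-in, not the authors' method: the product construction with an automaton for the LTL formula is omitted, and the transition matrices, goal state, and parameters are illustrative assumptions.

        import numpy as np

        # Illustrative MDP: 4 states, 2 actions; P[a][s, s'] = transition probability.
        # State 3 plays the role of an accepting ("goal") state of the product MDP.
        P = [
            np.array([[0.8, 0.2, 0.0, 0.0],
                      [0.0, 0.5, 0.5, 0.0],
                      [0.1, 0.0, 0.4, 0.5],
                      [0.0, 0.0, 0.0, 1.0]]),
            np.array([[0.2, 0.0, 0.8, 0.0],
                      [0.3, 0.0, 0.0, 0.7],
                      [0.0, 0.6, 0.0, 0.4],
                      [0.0, 0.0, 0.0, 1.0]]),
        ]
        goal = {3}

        def max_reach_probability(P, goal, n_iter=1000, tol=1e-10):
            """Value iteration for the maximal probability of reaching `goal`."""
            n = P[0].shape[0]
            v = np.array([1.0 if s in goal else 0.0 for s in range(n)])
            for _ in range(n_iter):
                q = np.stack([Pa @ v for Pa in P])   # one row of values per action
                v_new = q.max(axis=0)
                v_new[list(goal)] = 1.0              # accepting states stay absorbing
                if np.max(np.abs(v_new - v)) < tol:
                    v = v_new
                    break
                v = v_new
            policy = np.stack([Pa @ v for Pa in P]).argmax(axis=0)
            return v, policy

        values, policy = max_reach_probability(P, goal)
        print("Reach probabilities:", values)
        print("Greedy policy:", policy)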

  11. A top-down hierarchical spatio-temporal process description method and its data organization

    Science.gov (United States)

    Xie, Jiong; Xue, Cunjin

    2009-10-01

    Modeling and representing spatio-temporal processes is a key foundation for analyzing geographic phenomena and acquiring high-level spatio-temporal knowledge. Bottom-up spatio-temporal representation methods based on an object-modeling view lack an explicit definition of the geographic phenomenon and a finer-grained representation of spatio-temporal causal relationships. Building on advances in the data modeling of spatio-temporal objects and events, and aiming to represent discrete regional dynamic phenomena composed of groups of spatio-temporal objects, a regional spatio-temporal process description method using a top-down hierarchical approach (STP-TDH) is proposed, and a data organization structure based on a relational database is designed and implemented, providing the data-structure foundation for advanced data utilization and decision-making. A land-use application case indicated that top-down process modeling agrees well with human spatio-temporal cognition, and that its hierarchical representation framework can depict the dynamic evolution of regional phenomena at a finer-grained level while reducing the complexity of the process description.

  12. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas eGiret

    2015-10-01

    Full Text Available Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmitting information about the self-generated calls in males and to storage of

  13. Elliptic Bessel processes and elliptic Dyson models realized as temporally inhomogeneous processes

    CERN Document Server

    Katori, Makoto

    2016-01-01

    The Bessel process with parameter $D>1$ and the Dyson model of interacting Brownian motions with coupling constant $\beta >0$ are extended to the processes, in which the drift term and the interaction terms are given by the logarithmic derivatives of Jacobi's theta functions. They are called the elliptic Bessel process, eBES$^{(D)}$, and the elliptic Dyson model, eDYS$^{(\beta)}$, respectively. Both are realized on the circumference of a circle $[0, 2 \pi r)$ with radius $r >0$ as temporally inhomogeneous processes defined in a finite time interval $[0, t_*)$, $t_* < \infty$. Transformations of them to Schrödinger-type equations with time-dependent potentials lead us to proving that eBES$^{(D)}$ and eDYS$^{(\beta)}$ can be constructed as the time-dependent Girsanov transformations of Brownian motions. In the special cases where $D=3$ and $\beta=2$, observables of the processes are defined and the processes are represented for them using the Brownian paths winding round a circle and pinned at time $t_*$. We...
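
    For orientation, the display below recalls the standard (non-elliptic) forms being extended here: the Bessel process with parameter $D$ and the Dyson model with coupling $\beta$. The elliptic versions replace the rational drift terms by logarithmic derivatives of Jacobi theta functions; the exact elliptic drifts are given in the paper and are not reproduced in this sketch.

        \[
          dX(t) = dB(t) + \frac{D-1}{2}\,\frac{dt}{X(t)}, \qquad
          dX_i(t) = dB_i(t) + \frac{\beta}{2}\sum_{j \neq i} \frac{dt}{X_i(t) - X_j(t)}, \quad i = 1, \dots, N.
        \]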

  14. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    Science.gov (United States)

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic

  15. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.

  16. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  17. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
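
    The peak-picking idea can be illustrated in a few lines of code: keep only the strongest time-frequency bins of a spectrogram at a target rate of features per second. This is a simplified stand-in for the sketch procedure described above (a plain STFT and a global top-N selection); all parameter values are chosen for illustration and are not taken from the study.

        import numpy as np
        from scipy.signal import stft

        def sketch_from_signal(x, fs, peaks_per_second=10):
            """Crude 'acoustic sketch': keep only the strongest time-frequency peaks."""
            f, t, Z = stft(x, fs=fs, nperseg=512)
            mag = np.abs(Z)
            n_keep = max(1, int(peaks_per_second * x.size / fs))
            # Indices of the n_keep largest magnitude bins, over the whole spectrogram.
            flat = np.argsort(mag, axis=None)[-n_keep:]
            mask = np.zeros_like(mag, dtype=bool)
            mask[np.unravel_index(flat, mag.shape)] = True
            sparse = np.where(mask, Z, 0.0)   # sparse time-frequency representation
            return f, t, sparse

        # Example: sketch of a 1-second, 440-Hz tone sampled at 16 kHz.
        fs = 16000
        x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
        f, t, sparse = sketch_from_signal(x, fs, peaks_per_second=10)
        print("Non-zero time-frequency features kept:", int(np.count_nonzero(sparse)))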

  18. Temporal structure in audiovisual sensory selection.

    Directory of Open Access Journals (Sweden)

    Anne Kösem

    Full Text Available In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to which extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues synchronized or not with the color change of the target (horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.

  19. Auditory processing and audiovisual integration revealed by combining psychophysical and fMRI experiments

    NARCIS (Netherlands)

    Tomaskovic, Sonja

    2006-01-01

    This thesis describes experiments conducted to investigate human perception and the processing of sound in the brain. Sound processing involves several stages. First, in the ear, sound is captured and transduced into an electrical signal; then the information is transported to

  20. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss (presbycusis) is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim of exploring the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustic stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more pronounced than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the two elderly groups (MP and EP). The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  1. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing.

    Science.gov (United States)

    Stevenson, Ryan A; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Camarata, Stephen; Wallace, Mark T

    2016-07-01

    A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequence of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g. speech vs. nonspeech, etc.), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications, and possible remediation strategies are outlined. Autism Res 2016, 9: 720-738. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Effects of spatial response coding on distractor processing: evidence from auditory spatial negative priming tasks with keypress, joystick, and head movement responses.

    Science.gov (United States)

    Möller, Malte; Mayr, Susanne; Buchner, Axel

    2015-01-01

    Prior studies of spatial negative priming indicate that distractor-assigned keypress responses are inhibited as part of visual, but not auditory, processing. However, recent evidence suggests that static keypress responses are not directly activated by spatially presented sounds and, therefore, might not call for an inhibitory process. In order to investigate the role of response inhibition in auditory processing, we used spatially directed responses that have been shown to result in direct response activation to irrelevant sounds. Participants localized a target sound by performing manual joystick responses (Experiment 1) or head movements (Experiment 2B) while ignoring a concurrent distractor sound. Relations between prime distractor and probe target were systematically manipulated (repeated vs. changed) with respect to identity and location. Experiment 2A investigated the influence of distractor sounds on spatial parameters of head movements toward target locations and showed that distractor-assigned responses are immediately inhibited to prevent false responding in the ongoing trial. Interestingly, performance in Experiments 1 and 2B was not generally impaired when the probe target appeared at the location of the former prime distractor and required a previously withheld and presumably inhibited response. Instead, performance was impaired only when prime distractor and probe target mismatched in terms of location or identity, which fully conforms to the feature-mismatching hypothesis. Together, the results suggest that response inhibition operates in auditory processing when response activation is provided but is presumably too short-lived to affect responding on the subsequent trial.

  3. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  4. Temporal aggregation in a periodically integrated autoregressive process

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); H.P. Boswijk (Peter)

    1996-01-01

    A periodically integrated autoregressive process for a time series which is observed S times per year assumes the presence of S - 1 cointegration relations between the annual series containing the seasonal observations, with the additional feature that these relations are different across the seasons.
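
    For reference, a periodically integrated autoregressive process of order one can be written in the standard textbook form below (this display is added for orientation and is not taken from the abstract itself); the restriction on the product of the seasonal coefficients is what defines periodic integration.

        \[
          y_t = \phi_{s(t)}\, y_{t-1} + \varepsilon_t, \qquad \prod_{s=1}^{S} \phi_s = 1,
        \]
        where $s(t) \in \{1, \dots, S\}$ denotes the season in which observation $t$ falls and the $\phi_s$ are not all equal to one.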

  5. Speech Processing Disorder in Neural Hearing Loss

    Directory of Open Access Journals (Sweden)

    Joseph P. Pillion

    2012-01-01

    Full Text Available Deficits in central auditory processing may occur in a variety of clinical conditions including traumatic brain injury, neurodegenerative disease, auditory neuropathy/dyssynchrony syndrome, neurological disorders associated with aging, and aphasia. Deficits in central auditory processing of a more subtle nature have also been studied extensively in neurodevelopmental disorders in children with learning disabilities, ADD, and developmental language disorders. Illustrative cases are reviewed demonstrating the use of an audiological test battery in patients with auditory neuropathy/dyssynchrony syndrome, bilateral lesions to the inferior colliculi, and bilateral lesions to the temporal lobes. Electrophysiological tests of auditory function were utilized to define the locus of dysfunction at neural levels ranging from the auditory nerve, midbrain, and cortical levels.

  6. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    Science.gov (United States)

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models.

  7. The Automatic and Controlled Processing of Temporal and Spatial Patterns.

    Science.gov (United States)

    1980-02-01

    Atkinson and Juola, 1973; Shiffrin and Geisler, 1973; and Corballis, 1975; Posner and Snyder, 1975). Schneider and Shiffrin (1977; Shiffrin and Schneider... Besides the frame size, Schneider and Shiffrin (1977) also varied the memory set size to study the differential load requirements of CM and VM... theoretical level, Shiffrin and Schneider (1977) described an automatic process as a sequence of memory nodes that nearly always become active in

  8. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning), as they practiced novel melodies, and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  9. Oxytocin receptor gene associated with the efficiency of social auditory processing

    NARCIS (Netherlands)

    M. Tops (Mattie); M.H. van IJzendoorn (Marinus); M.M.E. Riem (Madelon); M.A.S. Boksem (Maarten); M.J. Bakermans-Kranenburg (Marian)

    2011-01-01

    Oxytocin has been shown to facilitate social aspects of sensory processing, thereby enhancing social communicative behaviors and empathy. Here we report that compared to the AA/AG genotypes, the presumably more efficient GG genotype of an oxytocin receptor gene polymorphism (OXTR rs53576

  10. Influence of Concurrent Auditory Input on Tactual Processing in Very Young Children: Developmental Changes.

    Science.gov (United States)

    Rose, Susan A.

    1985-01-01

    Right-hemispheric specialization for tactual processing was investigated in right-handed preschool children. Cross-modal transfer from touch to vision was assessed while children palpated shapes with one hand while music was simultaneously played to one ear. Left-hand advantage and the lateralized nature of interference among older children support…

  11. Gaussian Process Based Independent Analysis for Temporal Source Separation in fMRI.

    Science.gov (United States)

    Hald, Ditte Høvenhoff; Henao, Ricardo; Winther, Ole

    2017-02-26

    Functional Magnetic Resonance Imaging (fMRI) gives us a unique insight into the processes of the brain, and opens up for analyzing the functional activation patterns of the underlying sources. Task-inferred supervised learning with restrictive assumptions in the regression set-up restricts the exploratory nature of the analysis. Fully unsupervised independent component analysis (ICA) algorithms, on the other hand, can struggle to detect clear classifiable components on single-subject data. We attribute this shortcoming to inadequate modeling of the fMRI source signals by failing to incorporate their temporal nature. fMRI source signals, biological stimuli and non-stimuli-related artifacts are all smooth over a time-scale compatible with the sampling time (TR). We therefore propose Gaussian process ICA (GPICA), which facilitates temporal dependency by the use of Gaussian process source priors. On two fMRI data sets with different sampling frequency, we show that the GPICA-inferred temporal components and associated spatial maps allow for a more definite interpretation than standard temporal ICA methods. The temporal structures of the sources are controlled by the covariance of the Gaussian process, specified by a kernel function with an interpretable and controllable temporal length scale parameter. We propose a hierarchical model specification, considering both instantaneous and convolutive mixing, and we infer source spatial maps, temporal patterns and temporal length scale parameters by Markov Chain Monte Carlo. A companion implementation made as a plug-in for SPM can be downloaded from https://github.com/dittehald/GPICA.
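
    To make the "interpretable temporal length scale" concrete, the following minimal sketch builds a squared-exponential kernel and draws one smooth temporal source from the resulting GP prior. The TR, number of scans, and length-scale value are assumptions for illustration; the actual GPICA model additionally infers mixing matrices and noise by MCMC.

        import numpy as np

        def squared_exponential_kernel(t, length_scale=2.0, variance=1.0):
            """Covariance matrix of a GP with an interpretable temporal length scale."""
            d = t[:, None] - t[None, :]
            return variance * np.exp(-0.5 * (d / length_scale) ** 2)

        # Illustrative temporal grid: 200 scans at TR = 2 s (values are assumptions).
        TR = 2.0
        t = np.arange(200) * TR
        K = squared_exponential_kernel(t, length_scale=10.0)

        # One draw from the GP prior: a smooth candidate temporal source signal.
        rng = np.random.default_rng(0)
        source = rng.multivariate_normal(mean=np.zeros(t.size), cov=K + 1e-8 * np.eye(t.size))
        print(source[:5])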

  12. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons show adaptation to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, representing a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  13. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.

  14. The time course of temporal attention effects on nonconscious prime processing.

    Science.gov (United States)

    Schubert, Torsten; Palazova, Marina; Hutt, Axel

    2013-11-01

    We presented a masked prime at various prime-target intervals (PTIs) before a target that required a speeded motor response and investigated the impact of temporal attention on the nonconscious prime processing. The allocation of temporal attention to the target was manipulated by presenting an accessory tone and comparing that condition with a no-tone condition. The results showed that, independently of the visibility of the prime, temporal attention led to an enhanced effect of prime-target congruency on the reaction times, and that the amount of the enhancement increased with increasing PTIs. This effect pattern is consistent with the assumption of increasing influences of temporal attention and of the increasing PTI on nonconscious prime processing; it argues against the hypothesis that temporal attention narrows the time period in which the prime may affect target processing. An accumulator model is proposed assuming that target-related temporal attention increases the accumulation rate for masked primes and, thus, enhances the impact of the prime on the speed of choice decisions.

  15. Hierarchical network model for the analysis of human spatio-temporal information processing

    Science.gov (United States)

    Schill, Kerstin; Baier, Volker; Roehrbein, Florian; Brauer, Wilfried

    2001-06-01

    The perception of spatio-temporal patterns is a fundamental part of visual cognition. In order to understand more about the principles behind these biological processes, we are analyzing and modeling the representation of spatio-temporal structures on different levels of abstraction. For the low-level processing of motion information we have argued for the existence of a spatio-temporal memory in early vision. The basic properties of this structure are reflected in a neural network model which is currently being developed. Here we discuss major architectural features of this network, which is based on Kohonen's SOMs. In order to enable the representation, processing and prediction of spatio-temporal patterns on different levels of granularity and abstraction, the SOMs are organized in a hierarchical manner. The model has the advantage of a 'self-teaching' learning algorithm and stores temporal information via local feedback in each computational layer. The constraints for the neural modeling and the data set for training the neural network are obtained from psychophysical experiments in which human subjects' abilities for dealing with spatio-temporal information are investigated.
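
    As a minimal sketch of the building block stacked in such a hierarchy, the code below implements a standard Kohonen SOM training loop. It is not the authors' hierarchical model: the grid size, learning-rate schedule, and toy data are illustrative assumptions.

        import numpy as np

        def train_som(data, grid_shape=(8, 8), n_epochs=20, lr0=0.5, sigma0=3.0, seed=0):
            """Minimal Kohonen SOM: one layer of the kind stacked in the hierarchy."""
            rng = np.random.default_rng(seed)
            h, w = grid_shape
            dim = data.shape[1]
            weights = rng.normal(size=(h, w, dim))
            # Grid coordinates, used to compute the neighbourhood of the winning unit.
            yy, xx = np.mgrid[0:h, 0:w]
            for epoch in range(n_epochs):
                lr = lr0 * (1 - epoch / n_epochs)
                sigma = sigma0 * (1 - epoch / n_epochs) + 0.5
                for x in rng.permutation(data):
                    dists = np.linalg.norm(weights - x, axis=2)
                    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)  # winner
                    g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
                    weights += lr * g[..., None] * (x - weights)
            return weights

        # Toy spatio-temporal patterns: short trajectories flattened into vectors.
        rng = np.random.default_rng(1)
        data = rng.normal(size=(500, 6))
        weights = train_som(data)
        print(weights.shape)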

  16. Auditory ERPs during rhyme and semantic processing: effects of reading ability in college students.

    Science.gov (United States)

    Lovrich, D; Cheng, J C; Velting, D M; Kazmerski, V

    1997-06-01

    Event-related potential (ERP), reaction time (RT), and response accuracy measures were obtained during the phonological and semantic categorization of spoken words in 14 undergraduates: 7 were average readers and 7 were reading-impaired. For the impaired readers, motor responses were significantly slower and less accurate than were those of the average readers in both classification tasks. ERPs obtained during rhyme processing displayed a relatively larger amplitude negativity at about 480 ms for the impaired readers as compared to the average readers, whereas semantic processing resulted in no major group differences in the ERPs at this latency. Also, N480 amplitude was larger during semantic relative to phonological classification for the average readers but not for the impaired readers. Results are compared to a previous study of reading-impaired children on the same tasks.

  17. Dissociating neural mechanisms of temporal sequencing and processing phonemes.

    Science.gov (United States)

    Gelfand, Jenna R; Bookheimer, Susan Y

    2003-06-01

    Using fMRI, we sought to determine whether the posterior, superior portion of Broca's area performs operations on phoneme segments specifically or implements processes general to sequencing discrete units. Twelve healthy volunteers performed two sequence manipulation tasks and one matching task, using strings of syllables and hummed notes. The posterior portion of Broca's area responded specifically to the sequence manipulation tasks, independent of whether the stimuli were composed of phonemes or hummed notes. In contrast, the left supramarginal gyrus was somewhat more specific to sequencing phoneme segments. These results suggest a functional dissociation of the canonical left hemisphere language regions encompassing the "phonological loop," with the left posterior inferior frontal gyrus responding not to the sound structure of language but rather to sequential operations that may underlie the ability to form words out of dissociable elements.

  18. Controlling contagious processes on temporal networks via adaptive rewiring

    CERN Document Server

    Belik, Vitaly; Hövel, Philipp

    2015-01-01

    We consider recurrent contagious processes on a time-varying network. As a control procedure to mitigate the epidemic, we propose an adaptive rewiring mechanism for temporary isolation of infected nodes upon their detection. As a case study, we investigate the network of pig trade in Germany. Based on extensive numerical simulations for a wide range of parameters, we demonstrate that the adaptation mechanism leads to a significant extension of the parameter range, for which most of the index nodes (origins of the epidemic) lead to vanishing epidemics. We find that diseases with detection times around a week and infectious periods up to 3 months can be effectively controlled. Furthermore the performance of adaptation is very heterogeneous with respect to the index node. We identify index nodes that are most responsive to the adaptation strategy and quantify the success of the proposed adaptation scheme in dependence on the infectious period and detection times.
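
    A toy version of the control idea (temporary isolation of infected nodes once detected) on a random temporal contact list is sketched below. The contact network, transmission probability, and time scales are illustrative assumptions, not the pig-trade data analysed in the paper.

        import numpy as np

        def simulate_sis_with_isolation(contacts, n_nodes, beta=0.1, recovery_days=30,
                                        detection_delay=7, index_node=0, n_days=365, seed=0):
            """Toy SIS-type epidemic on a temporal contact list with isolation after detection.

            `contacts` is a list of (day, source, target) events; an infected node stops
            transmitting `detection_delay` days after infection, mimicking temporary
            isolation upon detection.
            """
            rng = np.random.default_rng(seed)
            infected_since = np.full(n_nodes, -1)   # -1 marks a susceptible node
            infected_since[index_node] = 0
            for day in range(n_days):
                # Recovery back to the susceptible state (recurrent dynamics).
                recovered = (infected_since >= 0) & (day - infected_since >= recovery_days)
                infected_since[recovered] = -1
                for (d, src, dst) in contacts:
                    if d != day or infected_since[src] < 0:
                        continue
                    if day - infected_since[src] >= detection_delay:
                        continue                    # source detected and isolated
                    if infected_since[dst] < 0 and rng.random() < beta:
                        infected_since[dst] = day
            return int(np.sum(infected_since >= 0))

        # Illustrative random temporal network of 50 nodes over one year.
        rng = np.random.default_rng(1)
        contacts = [(int(rng.integers(0, 365)), int(rng.integers(0, 50)), int(rng.integers(0, 50)))
                    for _ in range(2000)]
        print("Infected at day 365:", simulate_sis_with_isolation(contacts, n_nodes=50))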

  19. Event-Related Brain Potentials Reveal Anomalies in Temporal Processing of Faces in Autism Spectrum Disorder

    Science.gov (United States)

    McPartland, James; Dawson, Geraldine; Webb, Sara J.; Panagiotides, Heracles; Carver, Leslie J.

    2004-01-01

    Background: Individuals with autism exhibit impairments in face recognition, and neuroimaging studies have shown that individuals with autism exhibit abnormal patterns of brain activity during face processing. The current study examined the temporal characteristics of face processing in autism and their relation to behavior. Method: High-density…

  20. Spike-coding mechanisms of cerebellar temporal processing in classical conditioning and voluntary movements.

    Science.gov (United States)

    Yamaguchi, Kenji; Sakurai, Yoshio

    2014-10-01

    Time is a fundamental and critical factor in daily life. Millisecond timing, which is the underlying temporal processing for speaking, dancing, and other activities, is reported to rely on the cerebellum. In this review, we discuss the cerebellar spike-coding mechanisms for temporal processing. Although the contribution of the cerebellum to both classical conditioning and voluntary movements is well known, the difference of the mechanisms for temporal processing between classical conditioning and voluntary movements is not clear. Therefore, we review the evidence of cerebellar temporal processing in studies of classical conditioning and voluntary movements and report the similarities and differences between them. From some studies, which used tasks that can change some of the temporal properties (e.g., the duration of interstimulus intervals) with keeping identical movements, we concluded that classical conditioning and voluntary movements may share a common spike-coding mechanism because simple spikes in Purkinje cells decrease at predicted times for responses regardless of the intervals between responses or stimulation.

  1. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  2. Temporal and speech processing skills in normal hearing individuals exposed to occupational noise

    Directory of Open Access Journals (Sweden)

    U Ajith Kumar

    2012-01-01

    Full Text Available Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. Consequences of cochlear hearing loss on speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years, and their non-noise-exposed counterparts (n = 30 in each age group). Participants in all groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.

  3. The "Musical Emotional Bursts": a validated set of musical affect bursts to investigate auditory affective processing.

    Science.gov (United States)

    Paquette, Sébastien; Peretz, Isabelle; Belin, Pascal

    2013-01-01

    The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analog of the Montreal Affective Voices (MAV)-a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (10 stimuli × 4 [3 emotions + neutral]), or a clarinet (10 stimuli × 4 [3 emotions + neutral]). The MEB arguably represent a primitive form of music emotional expression, just like the MAV represent a primitive form of vocal, non-linguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli [30 stimuli × 4 (3 emotions + neutral) × 2 instruments] by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n:80) was lower than for the MAVs but still very high with an average percent correct recognition score of 80.4%. Highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or used for testing affective perception in patients with communication problems.

  4. The Musical Emotional Bursts: A validated set of musical affect bursts to investigate auditory affective processing.

    Directory of Open Access Journals (Sweden)

    Sébastien ePaquette

    2013-08-01

    Full Text Available The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analogue of the Montreal Affective Voices (MAV) – a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 sec) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (n: 40) or a clarinet (n: 40). The MEB arguably represent a primitive form of music emotional expression, just like the MAV represent a primitive form of vocal, nonlinguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli (30 stimuli x 4 [3 emotions + neutral] x 2 instruments) by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n: 80) was lower than for the MAVs but still very high, with an average percent correct recognition score of 80.4%. Highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or used for testing affective perception in patients with communication problems.

  5. Auditory processing in stutterers: performance of right and left ears

    Directory of Open Access Journals (Sweden)

    Adriana Neves de Andrade

    2008-03-01

    Full Text Available PURPOSE: To compare the difference between the performances of right and left ears in behavioral tests of auditory processing, and to compare the results obtained by subjects with different stuttering severity classifications in each auditory processing test. METHODS: Fifty-six subjects (49 male, 7 female), aged four to 34 years, were referred by the UNIFESP Speech-Language Pathology Assessment Outpatient Clinic for behavioral assessment of auditory processing. All patients underwent hearing, speech and language assessments. Disfluency was classified according to the protocol of Riley (1994), which grades stuttering severity as very mild, mild, moderate, severe, and very severe. The auditory processing tests were selected and analyzed according to the patient's age, following the proposal of Pereira & Schochat (1997). RESULTS: Mild stuttering was most prevalent in the four-to-seven and 12-to-34 year age groups, and moderate stuttering in subjects aged eight to 11 years. Of the 56 subjects assessed, 92.85% showed altered auditory processing. There was a statistically significant difference between the right and left ears in the directed-attention stage of the nonverbal dichotic test in all age groups studied. No significant differences were found between stuttering severity levels in any of the auditory processing tests. CONCLUSIONS: The right ear performed better than the left ear in the different behavioral tests. Stuttering severity did not affect the results of any test.

  6. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations following cortical stroke. In all of them the hallucinations occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  7. Contribution of bioanthropology to the reconstruction of prehistoric productive processes. The external auditory exostoses in the prehispanic population of Gran Canaria

    Directory of Open Access Journals (Sweden)

    Velasco Vázquez, Javier

    2001-06-01

    Full Text Available The aim of this paper is an approach to the role of bioanthropological studies in the reconstruction of the productive processes of past societies. This objective is obtained starting from the survey and valuation of the prevalence of bone exostoses in the auditory canal among the prehistoric inhabitants of Gran Canaria. The auditory exostose is a bone wound well documented through clinical and experimental studies, closely related to the exposure of the auditory canal to cold water. The estimation of this bone anomaly among the analysed population, leads to the definition of outstanding territorial variations in the economic strategies of these human groups.

  8. Osteocyte apoptosis and absence of bone remodeling in human auditory ossicles and scleral ossicles of lower vertebrates: a mere coincidence or linked processes?

    Science.gov (United States)

    Palumbo, Carla; Cavani, Francesco; Sena, Paola; Benincasa, Marta; Ferretti, Marzia

    2012-03-01

    Considering the pivotal role as bone mechanosensors ascribed to osteocytes in bone adaptation to mechanical strains, the present study analyzed whether a correlation exists between osteocyte apoptosis and bone remodeling in peculiar bones, such as human auditory ossicles and scleral ossicles of lower vertebrates, which have been shown to undergo substantial osteocyte death and trivial or no bone turnover after cessation of growth. The investigation was performed with a morphological approach under LM (by means of an in situ end-labeling technique) and TEM. The results show that a large amount of osteocyte apoptosis takes place in both auditory and scleral ossicles after they reach their final size. Additionally, no morphological signs of bone remodeling were observed. These facts suggest that (1) bone remodeling is not necessarily triggered by osteocyte death, at least in these ossicles, and (2) bone remodeling is not needed to mechanically adapt auditory and scleral ossicles, since they appear to be continuously subjected to stereotyped stresses and strains; on the contrary, during the resorption phase, bone remodeling might severely impair the mechanical resistance of extremely small bony segments. Thus, osteocyte apoptosis could represent a programmed process devoted to stabilizing bone structure and mechanical resistance when needed.

  9. Research of Cadastral Data Modelling and Database Updating Based on Spatio-temporal Process

    Directory of Open Access Journals (Sweden)

    ZHANG Feng

    2016-02-01

    Full Text Available The core of modern cadastre management is to keep the cadastre database up to date and to preserve its currentness, topological consistency, and integrity. This paper analyzes the changes to the various cadastral objects, and the linkages between those changes, during the update process. Combining object-oriented modeling techniques with an explicit representation of the evolution of spatio-temporal objects, the paper proposes a cadastral data updating model based on the spatio-temporal process, following the way people reason about change. Change rules based on the spatio-temporal topological relations of evolving cadastral spatio-temporal objects are drafted, and cascade updating and history back-tracing of cadastral features, land use, and buildings are realized. The model is implemented in the cadastral management system ReGIS. Cascade changes are triggered by the direct driving force or by perceived external events. The system records the evolution process of spatio-temporal objects to facilitate the reconstruction of history, change tracking, analysis, and forecasting of future changes.
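
    The history back-tracing and cascade-updating behaviour described in this record can be illustrated with a minimal sketch of a versioned parcel object. This is an illustration only: the class names, fields, and update workflow below are assumptions made for the example, not the data model of ReGIS or the authors' implementation.

        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import List, Optional

        @dataclass
        class ParcelVersion:
            """One temporal state of a cadastral parcel (illustrative fields only)."""
            geometry_wkt: str                    # spatial extent of this version
            valid_from: datetime
            valid_to: Optional[datetime] = None  # None marks the current version
            event: str = "initial"               # e.g. "split", "merge", "boundary change"

        @dataclass
        class Parcel:
            parcel_id: str
            history: List[ParcelVersion] = field(default_factory=list)

            def update(self, geometry_wkt: str, event: str, when: datetime) -> None:
                # Close the current version and append a new one; a full system would
                # also cascade the change to dependent land-use and building objects.
                if self.history:
                    self.history[-1].valid_to = when
                self.history.append(ParcelVersion(geometry_wkt, when, None, event))

            def as_of(self, when: datetime) -> Optional[ParcelVersion]:
                # History back-trace: reconstruct the parcel state valid at a given date.
                for version in self.history:
                    if version.valid_from <= when and (version.valid_to is None or when < version.valid_to):
                        return version
                return None

        # Example: register a parcel, change its boundary, then query its 2010 state.
        p = Parcel("P-001")
        p.update("POLYGON((0 0,1 0,1 1,0 1,0 0))", "initial", datetime(2005, 1, 1))
        p.update("POLYGON((0 0,2 0,2 1,0 1,0 0))", "boundary change", datetime(2012, 6, 1))
        print(p.as_of(datetime(2010, 1, 1)).event)   # -> initial

    In a full cadastral system the update step would also fire the change rules mentioned in the abstract, propagating the modification to dependent land-use and building objects.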

  10. Temporal Beta Diversity of Bird Assemblages in Agricultural Landscapes: Land Cover Change vs. Stochastic Processes.

    Science.gov (United States)

    Baselga, Andrés; Bonthoux, Sébastien; Balent, Gérard

    2015-01-01

    Temporal variation in the composition of species assemblages could be the result of deterministic processes driven by environmental change and/or stochastic processes of colonization and local extinction. Here, we analyzed the relative roles of deterministic and stochastic processes on bird assemblages in an agricultural landscape of southwestern France. We first assessed the impact of land cover change that occurred between 1982 and 2007 on (i) the species composition (presence/absence) of bird assemblages and (ii) the spatial pattern of taxonomic beta diversity. We also compared the observed temporal change of bird assemblages with a null model accounting for the effect of stochastic dynamics on temporal beta diversity. Temporal assemblage dissimilarity was partitioned into two separate components, accounting for the replacement of species (i.e. turnover) and for the nested species losses (or gains) from one time to the other (i.e. nestedness-resultant dissimilarity), respectively. Neither the turnover nor the nestedness-resultant components of temporal variation were accurately explained by any of the measured variables accounting for land cover change (r² < 0.06 in all cases). Additionally, the amount of spatial assemblage heterogeneity in the region did not significantly change between 1982 and 2007, and site-specific observed temporal dissimilarities were larger than null expectations in only 1% of sites for temporal turnover and 13% of sites for nestedness-resultant dissimilarity. Taken together, our results suggest that land cover change in this agricultural landscape had little impact on temporal beta diversity of bird assemblages. Although other unmeasured deterministic processes could be driving the observed patterns, it is also possible that the observed changes in presence/absence species composition of local bird assemblages might be the consequence of stochastic processes in which species populations appeared and disappeared from specific localities in a random-like way. Our results might be case-specific, but if stochastic dynamics are generally dominant, the ability of correlative and mechanistic models to predict land cover change effects on species composition would be compromised.
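
    The temporal dissimilarity partition used in this record follows the Baselga-style decomposition of total (Sørensen) dissimilarity into a turnover (Simpson) component and a nestedness-resultant component. The sketch below shows the arithmetic on presence/absence data for one site surveyed at two dates; the function and variable names are assumed for the example.

        def temporal_beta_partition(species_t1, species_t2):
            """Partition temporal dissimilarity between two presence/absence
            assemblages of the same site (Baselga-style decomposition).

            Returns (beta_sor, beta_sim, beta_sne), where
              beta_sor = (b + c) / (2a + b + c)       total (Sorensen) dissimilarity
              beta_sim = min(b, c) / (a + min(b, c))  turnover (Simpson) component
              beta_sne = beta_sor - beta_sim          nestedness-resultant component
            with a = shared species, b and c = species unique to each survey.
            """
            s1, s2 = set(species_t1), set(species_t2)
            a = len(s1 & s2)
            b = len(s1 - s2)
            c = len(s2 - s1)
            if a + b + c == 0:
                return 0.0, 0.0, 0.0
            beta_sor = (b + c) / (2 * a + b + c)
            beta_sim = min(b, c) / (a + min(b, c)) if (a + min(b, c)) > 0 else 0.0
            beta_sne = beta_sor - beta_sim
            return beta_sor, beta_sim, beta_sne

        # Hypothetical example: bird lists for one site in 1982 and in 2007.
        print(temporal_beta_partition({"skylark", "linnet", "yellowhammer"},
                                      {"skylark", "linnet", "wood pigeon", "blackbird"}))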

  11. Temporal processing in the olfactory system: can we see a smell?

    Science.gov (United States)

    Gire, David H; Restrepo, Diego; Sejnowski, Terrence J; Greer, Charles; De Carlos, Juan A; Lopez-Mascaraque, Laura

    2013-05-01

    Sensory processing circuits in the visual and olfactory systems receive input from complex, rapidly changing environments. Although patterns of light and plumes of odor create different distributions of activity in the retina and olfactory bulb, both structures use what appear, on the surface, to be similar temporal coding strategies to convey information to higher areas in the brain. We compare temporal coding in the early stages of the olfactory and visual systems, highlighting recent progress in understanding the role of time in olfactory coding during active sensing by behaving animals. We also examine studies that address the divergent circuit mechanisms that generate temporal codes in the two systems, and find that they provide physiological information directly related to functional questions raised by the neuroanatomical studies of Ramon y Cajal over a century ago. Consideration of differences in neural activity in sensory systems contributes to generating new approaches to understand signal processing.

  12. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    Science.gov (United States)

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, is stochastically exact, and is up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is typically 10 to 100 times faster than rejection sampling.
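
    As a rough, hedged illustration of the idea (not the authors' published C++ code), the sketch below applies the exponential waiting-time budget of a temporal-Gillespie-style scheme to an SIS process on a snapshot-based temporal network. The interface (a snapshots(k) function, a snapshot duration dt, and the parameter names) is assumed for the example.

        import math
        import random

        def temporal_gillespie_sis(snapshots, dt, beta, mu, infected, t_max):
            """Sketch of a temporal-Gillespie-style SIS simulation on a snapshot network.

            snapshots : function k -> list of undirected edges (i, j) active in snapshot k
            dt        : duration of one snapshot
            beta, mu  : per-contact infection rate and per-node recovery rate
            infected  : set of initially infected nodes (modified in place)
            """
            t, k = 0.0, 0
            remaining = dt                         # time left in the current snapshot
            budget = -math.log(random.random())    # Exp(1) waiting-time budget
            history = [(t, len(infected))]
            while t < t_max and infected:
                edges = snapshots(k)
                si_links = [(i, j) for i, j in edges if (i in infected) != (j in infected)]
                rate = beta * len(si_links) + mu * len(infected)   # total event rate at time t
                if rate * remaining < budget:
                    # No event before the snapshot ends: spend budget, move to next snapshot.
                    budget -= rate * remaining
                    t += remaining
                    k += 1
                    remaining = dt
                    continue
                # An event occurs within this snapshot.
                tau = budget / rate
                t += tau
                remaining -= tau
                if random.random() < mu * len(infected) / rate:    # recovery
                    infected.discard(random.choice(sorted(infected)))
                else:                                              # infection along an S-I link
                    i, j = random.choice(si_links)
                    infected.add(i if i not in infected else j)
                history.append((t, len(infected)))
                budget = -math.log(random.random())
            return history

        # Example: the same contact pattern repeated in every snapshot.
        edges = [(0, 1), (1, 2), (2, 3)]
        print(temporal_gillespie_sis(lambda k: edges, dt=1.0, beta=0.5, mu=0.2,
                                     infected={0}, t_max=50.0)[-1])

    The key point is that the Exp(1) budget is spent across snapshots in proportion to the instantaneous total rate, so no candidate events are proposed and rejected; this is what makes the approach faster than rejection sampling.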

  13. Reorganisation of the right occipito-parietal stream for auditory spatial processing in early blind humans. A transcranial magnetic stimulation study.

    Science.gov (United States)

    Collignon, O; Davare, M; Olivier, E; De Volder, A G

    2009-05-01

    It is well known that, following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. In particular, several studies have demonstrated an extensive activation of occipital brain areas, usually regarded as essentially "visual", when early blind subjects (EB) perform a task that requires spatial processing of sounds. However, little is known about the possible consequences of the activation of occipital areas for the function of the large cortical network known, in sighted subjects, to be involved in the processing of auditory spatial information. To address this issue, we used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right intra-parietal sulcus (rIPS) or the right dorsal extrastriate occipital cortex (rOC) at different delays in EB subjects performing a sound lateralization task. Surprisingly, TMS applied over rIPS, a region critically involved in the spatial processing of sound in sighted subjects, had no influence on task performance in EB. In contrast, TMS applied over rOC 50 ms after sound onset disrupted the spatial processing of sounds originating from the contralateral hemifield. The present study sheds new light on the reorganisation of the cortical network dedicated to the spatial processing of sounds in EB by showing an early contribution of rOC and a lesser involvement of rIPS.

  14. Auditory Processing Disorders

    Science.gov (United States)

    ... hearing loss. APD is often associated with various learning disabilities. Children with APD experience difficulties in less-than-ideal (noisy) listening situations and may have difficulties with reading, spelling, attention, and language problems. APD is common in ...

  15. Temporal Beta Diversity of Bird Assemblages in Agricultural Landscapes: Land Cover Change vs. Stochastic Processes.

    Directory of Open Access Journals (Sweden)

    Andrés Baselga

    Full Text Available Temporal variation in the composition of species assemblages could be the result of deterministic processes driven by environmental change and/or stochastic processes of colonization and local extinction. Here, we analyzed the relative roles of deterministic and stochastic processes on bird assemblages in an agricultural landscape of southwestern France. We first assessed the impact of land cover change that occurred between 1982 and 2007 on (i) the species composition (presence/absence) of bird assemblages and (ii) the spatial pattern of taxonomic beta diversity. We also compared the observed temporal change of bird assemblages with a null model accounting for the effect of stochastic dynamics on temporal beta diversity. Temporal assemblage dissimilarity was partitioned into two separate components, accounting for the replacement of species (i.e. turnover) and for the nested species losses (or gains) from one time to the other (i.e. nestedness-resultant dissimilarity), respectively. Neither the turnover nor the nestedness-resultant components of temporal variation were accurately explained by any of the measured variables accounting for land cover change (r² < 0.06 in all cases). Additionally, the amount of spatial assemblage heterogeneity in the region did not significantly change between 1982 and 2007, and site-specific observed temporal dissimilarities were larger than null expectations in only 1% of sites for temporal turnover and 13% of sites for nestedness-resultant dissimilarity. Taken together, our results suggest that land cover change in this agricultural landscape had little impact on temporal beta diversity of bird assemblages. Although other unmeasured deterministic processes could be driving the observed patterns, it is also possible that the observed changes in presence/absence species composition of local bird assemblages might be the consequence of stochastic processes in which species populations appeared and disappeared from specific localities in a random-like way. Our results might be case-specific, but if stochastic dynamics are generally dominant, the ability of correlative and mechanistic models to predict land cover change effects on species composition would be compromised.

  16. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  17. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli

    NARCIS (Netherlands)

    Talsma, Durk; Senkowski, Daniel; Woldorff, Marty G.

    2009-01-01

    The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory

  18. Temporally selective attention supports speech processing in 3- to 5-year-old children.

    Science.gov (United States)

    Astheimer, Lori B; Sanders, Lisa D

    2012-01-01

    Recent event-related potential (ERP) evidence demonstrates that adults employ temporally selective attention to preferentially process the initial portions of words in continuous speech. Doing so is an effective listening strategy since word-initial segments are highly informative. Although the development of this process remains unexplored, directing attention to word onsets may be important for speech processing in young children who would otherwise be overwhelmed by the rapidly changing acoustic signals that constitute speech. We examined the use of temporally selective attention in 3- to 5-year-old children listening to stories by comparing ERPs elicited by attention probes presented at four acoustically matched times relative to word onsets: concurrently with a word onset, 100 ms before, 100 ms after, and at random control times. By 80 ms, probes presented at and after word onsets elicited a larger negativity than probes presented before word onsets or at control times. The latency and distribution of this effect is similar to temporally and spatially selective attention effects measured in adults and, despite differences in polarity, spatially selective attention effects measured in children. These results indicate that, like adults, preschool aged children modulate temporally selective attention to preferentially process the initial portions of words in continuous speech.

  19. ABR and auditory P300 findings in children with ADHD

    OpenAIRE

    Schochat Eliane; Scheuer Claudia Ines; Andrade Ênio Roberto de

    2002-01-01

    Auditory processing disorders (APD), also referred to as central auditory processing disorders (CAPD), and attention deficit hyperactivity disorder (ADHD) have become popular diagnostic entities for school-age children. A high incidence of ADHD comorbid with communication disorders and auditory processing disorder has been demonstrated. The aim of this study was to investigate ABR and P300 auditory evoked potentials in children with ADHD, in a double-blind study. Twenty-one children, ages bet...

  20. Aggressive osteoblastoma in mastoid process of temporal bone with facial palsy

    Directory of Open Access Journals (Sweden)

    Manoj Jain

    2013-01-01

    Full Text Available Osteoblastoma is an uncommon primary bone tumor with a predilection for the posterior elements of the spine. Its occurrence in the temporal bone and middle ear is extremely rare. Clinical symptoms are non-specific and cranial nerve involvement is uncommon. The cytomorphological features of osteoblastoma are not well defined, and experience is limited to only a few reports. We report an interesting and rare case of aggressive osteoblastoma, with progressive hearing loss and facial palsy, involving the mastoid process of the temporal bone and middle ear, along with a description of its cytomorphological features.

  1. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is the most common form of false perception: perceiving a stimulus wrongly or, more precisely, perceiving a stimulus that is absent. Here we will discuss four definitions of hallucinations: (1) perception of a stimulus without the presence of any object; (2) hallucinations proper, which are false perceptions that are not falsifications of real perceptions, although they appear as a new object and occur along with, and synchronously with, a real perception; (3) hallucination as an out-of-body perception that has no correspondence with a real object; and (4) in a stricter sense, hallucinations defined as perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  2. Spatiotemporal properties of the BOLD response in the songbirds' auditory circuit during a variety of listening tasks.

    Science.gov (United States)

    Van Meir, Vincent; Boumans, Tiny; De Groof, Geert; Van Audekerke, Johan; Smolders, Alain; Scheunders, Paul; Sijbers, Jan; Verhoye, Marleen; Balthazart, Jacques; Van der Linden, Annemie

    2005-05-01

    Auditory fMRI in humans has recently received increasing attention from cognitive neuroscientists as a tool to understand the mental processing of learned acoustic sequences and to analyze speech recognition and the development of musical skills. The present study introduces this tool in a well-documented animal model for vocal learning, the songbird, and provides fundamental insight into the main technical issues associated with auditory fMRI in these songbirds. Stimulation protocols with various listening tasks lead to appropriate activation of successive relays in the songbirds' auditory pathway. The elicited BOLD response is also region- and stimulus-specific, and its temporal aspects provide accurate measures of the changes in brain physiology induced by the acoustic stimuli. Extensive repetition of an identical stimulus does not lead to habituation of the response in the primary or secondary telencephalic auditory regions of anesthetized subjects. The BOLD signal intensity changes during a stimulation and subsequent rest period have a very specific time course which shows a remarkable resemblance to auditory evoked BOLD responses commonly observed in human subjects. This observation indicates that auditory fMRI in the songbird may establish a link between auditory-related neuroimaging studies done in humans and the large body of neuro-ethological research on song learning and neuroplasticity performed in songbirds.

  3. Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.

    Science.gov (United States)

    Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza

    2012-02-01

    Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.

  4. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  5. A recurrent fuzzy network for fuzzy temporal sequence processing and gesture recognition.

    Science.gov (United States)

    Juang, Chia-Feng; Ku, Ksuan-Chun

    2005-08-01

    A fuzzified Takagi-Sugeno-Kang (TSK)-type recurrent fuzzy network (FTRFN) for handling fuzzy temporal information is proposed in this paper. The FTRFN extends our previously proposed network, TRFN, to deal with fuzzy temporal signals represented by Gaussian or triangular fuzzy numbers. In the precondition part of the FTRFN, matching degrees between input fuzzy variables and fuzzy antecedent sets are computed by a similarity measure. In the TSK-type consequence, a linear combination of fuzzy variables is computed, where two sets of combination coefficients are used, one for the center and the other for the width of each fuzzy number. Derivation of the linear combination results and of the final network output is based on left-right fuzzy number operations. There are no rules in the FTRFN initially; they are constructed online by concurrent structure and parameter learning, where all free parameters in the precondition/consequence of the FTRFN are tunable. The FTRFN can be applied to a variety of domains related to fuzzy temporal information processing. In this paper, it has been applied to one-dimensional and two-dimensional fuzzy temporal sequence prediction and to CCD-based temporal gesture recognition. The performance of the FTRFN is verified in these examples.
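
    As a hedged illustration of the consequent computation described above (not the authors' FTRFN code), the sketch below forms a TSK-style linear combination of symmetric fuzzy numbers, each represented by a center and a width, using one coefficient set for the centers and another for the widths; taking absolute values keeps the combined width non-negative, in the spirit of left-right fuzzy number arithmetic.

        def tsk_fuzzy_consequent(inputs, center_coeffs, width_coeffs, bias=0.0):
            """Linear combination of symmetric fuzzy numbers given as (center, width)
            pairs, with separate coefficient sets for centers and widths; absolute
            values keep the combined width non-negative (left-right arithmetic)."""
            center = bias + sum(a * c for a, (c, _) in zip(center_coeffs, inputs))
            width = sum(abs(b) * w for b, (_, w) in zip(width_coeffs, inputs))
            return center, width

        # Example: two fuzzy inputs with centers 1.0 and 3.0 and widths 0.2 and 0.5.
        print(tsk_fuzzy_consequent([(1.0, 0.2), (3.0, 0.5)], [0.5, -0.2], [0.5, 0.2]))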

  6. Ontology Mapping of Business Process Modeling Based on Formal Temporal Logic

    Directory of Open Access Journals (Sweden)

    Irfan Chishti

    2014-08-01

    Full Text Available A business process is the combination of a set of activities with a logical order and dependencies, whose objective is to produce a desired goal. Business process modeling (BPM), using knowledge of the available process modeling techniques, enables a common understanding and analysis of a business process. Industry and academia use informal and formal techniques, respectively, to represent business processes (BP), with the main objective of supporting an organization. Although both aim at BPM, the techniques used differ considerably in their semantics. A review of the literature found that no general representation of business process modeling is available that is more expressive than the commercial modeling tools and techniques. Therefore, this work is primarily conceived to provide an ontology mapping of the modeling terms of Business Process Modeling Notation (BPMN), Unified Modeling Language (UML) Activity Diagrams (AD), and Event-Driven Process Chains (EPC) to temporal logic. Being a formal system, first-order logic assists in a thorough understanding of process modeling and its application. Our contribution is to devise a versatile conceptual categorization of modeling terms/constructs and to formalize them, based on well-accepted business notions such as action, event, process, connector, and flow. It is demonstrated that the new categorization of modeling terms, mapped to formal temporal logic, provides the expressive power to subsume the business process modeling techniques BPMN, UML AD, and EPC.
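
    To make the flavour of such a mapping concrete, the formulas below give generic temporal-logic renderings (in LTL-style notation, written as LaTeX) of a few common workflow constructs. They are illustrative assumptions, not the paper's exact formalization; predicates such as started(A) and completed(A) are introduced only for the example.

        % Sequence flow from activity A to activity B: whenever A completes, B eventually starts.
        \Box\bigl(\mathit{completed}(A) \rightarrow \Diamond\,\mathit{started}(B)\bigr)

        % Exclusive (XOR) split after A: exactly one of the branches B, C eventually starts.
        \Box\bigl(\mathit{completed}(A) \rightarrow (\Diamond\,\mathit{started}(B) \oplus \Diamond\,\mathit{started}(C))\bigr)

        % Parallel (AND) split after A: both branches eventually start.
        \Box\bigl(\mathit{completed}(A) \rightarrow \Diamond\,\mathit{started}(B) \wedge \Diamond\,\mathit{started}(C)\bigr)

        % Event-driven trigger: occurrence of event E eventually starts process P.
        \Box\bigl(\mathit{occurs}(E) \rightarrow \Diamond\,\mathit{started}(P)\bigr)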

  7. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    Science.gov (United States)

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness …) and hearing controls took part. Deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may

  8. Intermixing forms of memory processing within the functional organization of the medial temporal lobe memory system.

    Science.gov (United States)

    Eichenbaum, Howard

    2012-01-01

    Abstract Voss et al. discuss evidence indicating an intermixing of implicit and explicit memory processing, and of familiarity and recollection, in tests of memory. Here I support this view, and add that the anatomy of cortical-medial temporal lobe pathways indicates a hierarchical and bidirectional functional organization of memory in which implicit memory processing contributes to familiarity, and implicit memory and familiarity processing inherently contribute to recollection. Rather than look for new ways to separate these processes, it may be as important to understand how they are integrated.

  9. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues … that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners … listeners and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones

  10. Neural Androgen Receptor Deletion Impairs the Temporal Processing of Objects and Hippocampal CA1-Dependent Mechanisms.

    Directory of Open Access Journals (Sweden)

    Marie Picot

    Full Text Available We studied the role of testosterone, mediated by the androgen receptor (AR), in modulating temporal order memory for visual objects. For this purpose, we used male mice lacking AR specifically in the nervous system. Control and mutant males were gonadectomized at adulthood and supplemented with equivalent amounts of testosterone in order to normalize their hormonal levels. We found that neural AR deletion selectively impaired the processing of temporal information for visual objects, without affecting classical object recognition or anxiety-like behavior and circulating corticosterone levels, which remained similar to those in control males. Thus, mutant males were unable to discriminate between the most recently seen object and previously seen objects, whereas their control littermates showed more interest in exploring previously seen objects. Because the hippocampal CA1 area has been associated with temporal memory for visual objects, we investigated whether neural AR deletion altered the functionality of this region. Electrophysiological analysis showed that neural AR deletion affected basal glutamate synaptic transmission and decreased the magnitude of N-methyl-D-aspartate receptor (NMDAR) activation and high-frequency stimulation-induced long-term potentiation. The impairment of NMDAR function was not due to changes in protein levels of receptor. These results provide the first evidence for the modulation of temporal processing of information for visual objects by androgens, via AR activation, possibly through regulation of NMDAR signaling in the CA1 area in male mice.

  11. Navigated transcranial magnetic stimulation of the primary somatosensory cortex impairs perceptual processing of tactile temporal discrimination.

    Science.gov (United States)

    Hannula, Henri; Neuvonen, Tuomas; Savolainen, Petri; Tukiainen, Taru; Salonen, Oili; Carlson, Synnöve; Pertovaara, Antti

    2008-05-30

    Previous studies indicate that transcranial magnetic stimulation (TMS) with biphasic pulses applied approximately over the primary somatosensory cortex (S1) suppresses performance in vibrotactile temporal discrimination tasks; these previous results, however, do not allow perceptual influences to be separated from memory or decision-making. Moreover, earlier studies using external landmarks for directing biphasic TMS pulses to the cortex do not reveal whether the changes in vibrotactile task performance were due to action on S1 or an adjacent area. In the present study, we determined whether the S1 area representing a cutaneous test site is critical for perceptual processing of tactile temporal discrimination. Electrical test pulses were applied to the thenar skin of the hand and the subjects attempted to discriminate single from twin pulses. During the discrimination task, monophasic TMS pulses or sham TMS pulses were directed anatomically accurately to the S1 area representing the thenar using magnetic resonance image-guided navigation. The subjects' capacity for temporal discrimination was impaired as the delay between the TMS pulse and the cutaneous test pulse decreased from 50 to 0 ms. The result indicates that the S1 area representing a cutaneous test site is involved in perceptual processing of tactile temporal discrimination.

  12. Neural Androgen Receptor Deletion Impairs the Temporal Processing of Objects and Hippocampal CA1-Dependent Mechanisms.

    Science.gov (United States)

    Picot, Marie; Billard, Jean-Marie; Dombret, Carlos; Albac, Christelle; Karameh, Nida; Daumas, Stéphanie; Hardin-Pouzet, Hélène; Mhaouty-Kodja, Sakina

    2016-01-01

    We studied the role of testosterone, mediated by the androgen receptor (AR), in modulating temporal order memory for visual objects. For this purpose, we used male mice lacking AR specifically in the nervous system. Control and mutant males were gonadectomized at adulthood and supplemented with equivalent amounts of testosterone in order to normalize their hormonal levels. We found that neural AR deletion selectively impaired the processing of temporal information for visual objects, without affecting classical object recognition or anxiety-like behavior and circulating corticosterone levels, which remained similar to those in control males. Thus, mutant males were unable to discriminate between the most recently seen object and previously seen objects, whereas their control littermates showed more interest in exploring previously seen objects. Because the hippocampal CA1 area has been associated with temporal memory for visual objects, we investigated whether neural AR deletion altered the functionality of this region. Electrophysiological analysis showed that neural AR deletion affected basal glutamate synaptic transmission and decreased the magnitude of N-methyl-D-aspartate receptor (NMDAR) activation and high-frequency stimulation-induced long-term potentiation. The impairment of NMDAR function was not due to changes in protein levels of receptor. These results provide the first evidence for the modulation of temporal processing of information for visual objects by androgens, via AR activation, possibly through regulation of NMDAR signaling in the CA1 area in male mice.

  13. Influence of fluvial environments on sediment archiving processes and temporal pollutant dynamics (Upper Loire River, France).

    Science.gov (United States)

    Dhivert, E; Grosbois, C; Rodrigues, S; Desmet, M

    2015-02-01

    Floodplains are often cored to build long-term pollutant trends at the basin scale. To highlight the influences of depositional environments on archiving processes, aggradation rates, archived trace element signals, and vertical redistribution processes, two floodplain cores were sampled in two different environments of the Upper Loire River (France): (i) a river bank ridge and (ii) a paleochannel connected at its downstream end. The base of the river bank core is composed of sandy sediments from the end of the Little Ice Age (late 18th century). This composition corresponds to a proximal floodplain aggradation (the aggradation rate depends on the topography and the degree of connection to the river channel). The temporal dynamics of anthropogenic trace element enrichments recorded in the distal floodplain are initially synchronous and present similar levels. Although the river bank core shows general temporal trends, the paleochannel core has a better resolution for short-term variations of trace element signals. After local water-depth regulation began in the early 1930s, differences in the degree of connection were enhanced between the two cores. Therefore, large divergences in trace element signals are recorded across the floodplain. The paleochannel core shows important temporal variations in enrichment levels from the 1930s to the coring date. However, the river bank core shows no significant temporal variations in trace element enrichments and lower contamination levels, because of a lower deposition of contaminated sediments and a pedogenetic redistribution of trace elements.

  14. Word recognition in competing babble and the effects of age, temporal processing, and absolute sensitivity.

    Science.gov (United States)

    Snell, Karen B; Mapes, Frances M; Hickman, Elizabeth D; Frisina, D Robert

    2002-08-01

    This study was designed to clarify whether speech understanding in a fluctuating background is related to temporal processing as measured by the detection of gaps in noise bursts. Fifty adults with normal hearing or mild high-frequency hearing loss served as subjects. Gap detection thresholds were obtained using a three-interval, forced-choice paradigm. A 150-ms noise burst was used as the gap carrier with the gap placed close to carrier onset. A high-frequency masker without a temporal gap was gated on and off with the noise bursts. A continuous white-noise floor was present in the background. Word scores for the subjects were obtained at a presentation level of 55 dB HL in competing babble levels of 50, 55, and 60 dB HL. A repeated measures analysis of covariance of the word scores examined the effects of age, absolute sensitivity, and temporal sensitivity. The results of the analysis indicated that word scores in competing babble decreased significantly with increases in babble level, age, and gap detection thresholds. The effects of absolute sensitivity on word scores in competing babble were not significant. These results suggest that age and temporal processing influence speech understanding in fluctuating backgrounds in adults with normal hearing or mild high-frequency hearing loss.

  15. Temporal Processing Ability Is Related to Ear-Asymmetry for Detecting Time Cues in Sound: A Mismatch Negativity (MMN) Study

    Science.gov (United States)

    Todd, Juanita; Finch, Brayden; Smith, Ellen; Budd, Timothy W.; Schall, Ulrich

    2011-01-01

    Temporal and spectral sound information is processed asymmetrically in the brain with the left-hemisphere showing an advantage for processing the former and the right-hemisphere for the latter. Using monaural sound presentation we demonstrate a context and ability dependent ear-asymmetry in brain measures of temporal change detection. Our measure…

  16. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable for…
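
    One common way to write such a multiplicative driving intensity, given here only as a hedged sketch (the symbols and factorization are illustrative assumptions, not necessarily the authors' exact specification), is

        \Lambda(s,t) \;=\; \lambda_1(s)\,\lambda_2(t)\,S(s,t),
        \qquad
        \lambda_1(s) = \exp\{z_1(s)^{\top}\beta_1\},
        \quad
        \lambda_2(t) = \exp\{z_2(t)^{\top}\beta_2\},

        S(s,t) \;=\; \sum_{(c_j,\,\tau_j)\in\Phi} \gamma\, k\bigl(s-c_j,\; t-\tau_j\bigr),

    where z_1 and z_2 carry the spatial and temporal covariates, Φ is a Poisson process of cluster centres, and k is a probability density kernel, so that S(s,t) plays the role of the shot-noise residual term.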

  17. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable for…

  18. Verbal Learning Processes in Patients with Glioma of the Left and Right Temporal Lobes

    Science.gov (United States)

    Noll, Kyle R.; Weinberg, Jeffrey S.; Ziu, Mateo; Wefel, Jeffrey S.

    2016-01-01

    Recent research supports the utility of process variables in understanding mechanisms underlying memory impairments. The Hopkins Verbal Learning Test-Revised (HVLT-R) was administered to 84 patients with left (LTL, n = 58) or right temporal lobe glioma (RTL, n = 26) prior to surgical resection. Primary HVLT-R measures of learning and memory and numerous learning process indices were computed. Both groups exhibited frequent memory impairment (>30%), with greater severity in the LTL group. Patients with LTL glioma also exhibited lower semantic clustering scores than RTL patients, which were highly associated with Total Recall (ρ = 0.83) and Delayed Recall (ρ = 0.68). Learning slope and a novel measure of learning efficiency were also significantly associated with primary memory measures, though scores were similar across the LTL and RTL groups. While lesions to either temporal lobe impact verbal memory, semantic encoding appears to depend upon the integrity of LTL structures in particular. PMID:26537777

  19. Calling song recognition in female crickets: temporal tuning of identified brain neurons matches behavior.

    Science.gov (United States)

    Kostarakos, Konstantinos; Hedwig, Berthold

    2012-07-11

    Phonotactic orientation of female crickets is tuned to the temporal pattern of the male calling song. We analyzed the phonotactic selectivity of female crickets to varying temporal features of calling song patterns and compared it with the auditory response properties of the ascending interneuron AN1 (herein referred to as TH1-AC1) and four newly identified local brain neurons. The neurites of all brain neurons formed a ring-like branching pattern in the anterior protocerebrum that overlapped with the axonal arborizations of TH1-AC1. All brain neurons responded phasically to the sound pulses of a species-specific chirp. The spike activity of TH1-AC1 and the local interneuron, B-LI2, copied different auditory patterns regardless of their temporal structure. Two other neurons, B-LI3 and B-LC3, matched the temporal selectivity of the phonotactic responses but also responded to some nonattractive patterns. Neuron B-LC3 linked the bilateral auditory areas in the protocerebrum. One local brain neuron, B-LI4, received inhibitory as well as excitatory synaptic inputs. Inhibition was particularly pronounced for nonattractive pulse patterns, reducing its spike activity. When tested with different temporal patterns, B-LI4 exhibited bandpass response properties; its different auditory response functions significantly matched the tuning of phonotaxis. Temporal selectivity was established already for the second of two sound pulses separated by one species-specific pulse interval. Temporal pattern recognition in the cricket brain occurs within the anterior protocerebrum at the first stage of auditory processing. It is crucially linked to a change in auditory responsiveness during pulse intervals and based on fast interactions of inhibition and excitation.

  20. Acoustic immittance and auditory processing screening findings in school children

    Directory of Open Access Journals (Sweden)

    Camila Lucia Etges

    2012-12-01

    Full Text Available PURPOSE: to verify the findings of acoustic immittance screening and of the simplified auditory processing evaluation tests in school children. METHOD: students from the 1st to the 4th grade, aged seven to ten years, from a public school in Porto Alegre took part in the study. A total of 130 school children were evaluated with the immittance screening, which consisted of tympanometry and ipsilateral acoustic reflex testing, and with the simplified auditory processing evaluation, including tests of sound localization, sequential memory for verbal sounds, and sequential memory for nonverbal sounds. RESULTS: 43.08% of the children passed the immittance screening, with the type A curve being the most frequent. The acoustic reflex at 4000 Hz was present in a lower percentage of children than at the other frequencies. 76.15% of the children passed the simplified auditory processing evaluation tests. In addition, the test in which the children showed the worst performance was sequential memory for verbal sounds. 12.3% of the children failed both the immittance screening and the simplified auditory processing evaluation. CONCLUSION: the type A tympanometric curve was the most frequent in the studied population. Most subjects passed the simplified auditory processing evaluation, with the highest proportion of correct responses in the sound localization test. There was no statistical association between the result of the immittance screening and the result of the simplified auditory processing evaluation.

  1. Differential Processing of Consonance and Dissonance within the Human Superior Temporal Gyrus

    OpenAIRE

    Francine Foo; David King-Stephens; Peter Weber; Kenneth Laxer; Josef Parvizi; Robert T Knight

    2016-01-01

    The auditory cortex is well-known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals di...

  2. Are left fronto-temporal brain areas a prerequisite for normal music-syntactic processing?

    Science.gov (United States)

    Sammler, Daniela; Koelsch, Stefan; Friederici, Angela D

    2011-06-01

    An increasing number of neuroimaging studies in music cognition research suggest that "language areas" are involved in the processing of musical syntax, but none of these studies clarified whether these areas are a prerequisite for normal syntax processing in music. The present electrophysiological experiment tested whether patients with lesions in Broca's area (N=6) or in the left anterior temporal lobe (N=7) exhibit deficits in the processing of structure in music compared to matched healthy controls (N=13). A chord sequence paradigm was applied, and the amplitude and scalp topography of the Early Right Anterior Negativity (ERAN) was examined, an electrophysiological marker of musical syntax processing that correlates with activity in Broca's area and its right hemisphere homotope. Left inferior frontal gyrus (IFG) (but not anterior superior temporal gyrus - aSTG) patients with lesions older than 4 years showed an ERAN with abnormal scalp distribution, and subtle behavioural deficits in detecting music-syntactic irregularities. In one IFG patient tested 7 months post-stroke, the ERAN was extinguished and the behavioural performance remained at chance level. These combined results suggest that the left IFG, known to be crucial for syntax processing in language, plays also a functional role in the processing of musical syntax. Hence, the present findings are consistent with the notion that Broca's area supports the processing of syntax in a rather domain-general way.

  3. Spatio-temporal pattern of vestibular information processing after brief caloric stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Marcelli, Vincenzo [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy)]; Esposito, Fabrizio [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy); Department of Cognitive Neurosciences, University of Maastricht, Maastricht (Netherlands)], E-mail: fabrizio.esposito@unina.it; Aragri, Adriana [Department of Neurological Sciences, Second University of Naples, Naples (Italy)]; Furia, Teresa; Riccardi, Pasquale [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy)]; Tosetti, Michela; Biagi, Laura [I.R.C.S.S. 'Stella Maris', Pisa (Italy)]; Marciano, Elio [Department of Neuroscience, University of Naples 'Federico II', Naples (Italy)]; Di Salle, Francesco [Department of Cognitive Neurosciences, University of Maastricht, Maastricht (Netherlands); I.R.C.S.S. 'Stella Maris', Pisa (Italy); Department of Neurosciences, University of Pisa, Pisa (Italy)]

    2009-05-15

    Processing of vestibular information at the cortical and subcortical level is essential for head and body orientation in space and for self-motion perception, but little is known about the neural dynamics of the brain regions of the vestibular system involved in this task. Neuroimaging studies using both galvanic and caloric stimulation have shown that several distinct cortical and subcortical structures can be activated during vestibular information processing. The insular cortex has often been targeted and presented as the central hub of the vestibular cortical system. Since very short pulses of cold-water ear irrigation can generate a strong and prolonged vestibular response and a nystagmus, we explored the effects of this type of caloric stimulation for assessing the blood-oxygen-level-dependent (BOLD) dynamics of neural vestibular processing in a whole-brain event-related functional magnetic resonance imaging (fMRI) experiment. We evaluated the spatial layout and the temporal dynamics of the activated cortical and subcortical regions, time-locked to the instant of injection, and were able to extract a robust pattern of neural activity involving the contra-lateral insular cortex, the thalamus, the brainstem and the cerebellum. No significant correlation with the temporal envelope of the nystagmus was found. The temporal analysis of the activation profiles highlighted a significantly longer duration of the evoked BOLD activity in the brainstem compared to the insular cortex, suggesting a functional de-coupling between cortical and subcortical activity during the vestibular response.

  4. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    Science.gov (United States)

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge.

  5. Auditory processing assessment in older people with no report of hearing disability

    Directory of Open Access Journals (Sweden)

    Maura Ligia Sanchez

    2008-12-01

    Full Text Available In the elderly, the results of behavioral assessment of the central auditory pathways are considered difficult to interpret because of the possible interference of peripheral auditory pathway involvement. AIM: To assess the efficiency of central auditory functions in elderly people who report hearing well. MATERIALS AND METHODS: Case study involving 40 individuals within the age range of 60 to 75 years. The patients underwent an auditory processing evaluation consisting of anamnesis, otorhinolaryngological examination, pure-tone threshold audiometry, speech recognition threshold, speech recognition index, acoustic immittance measures, stapedial reflex testing, the synthetic sentence identification test with ipsilateral competing message, the frequency pattern test, and the alternating dissyllables test administered as a dichotic task. RESULTS: Gender, age range, and hearing loss did not influence the results of the frequency pattern and alternating dissyllables dichotic tests; age range and hearing loss influenced the results of the sentence identification test with ipsilateral competing message. Percentages of correct responses below adult normative standards were observed in the three tests that assess central auditory functions. CONCLUSION: Elderly individuals who report hearing well show a relevant prevalence of signs of inefficiency of central auditory functions.

  6. Involvement of the superior temporal cortex and the occipital cortex in spatial hearing: evidence from repetitive transcranial magnetic stimulation.

    Science.gov (United States)

    Lewald, Jörg; Meister, Ingo G; Weidemann, Jürgen; Töpper, Rudolf

    2004-06-01

    The processing of auditory spatial information in cortical areas of the human brain outside of the primary auditory cortex remains poorly understood. Here we investigated the role of the superior temporal gyrus (STG) and the occipital cortex (OC) in spatial hearing using repetitive transcranial magnetic stimulation (rTMS). The right STG is known to be of crucial importance for visual spatial awareness, and has been suggested to be involved in auditory spatial perception. We found that rTMS of the right STG induced a systematic error in the perception of interaural time differences (a primary cue for sound localization in the azimuthal plane). This is in accordance with the recent view, based on both neurophysiological data obtained in monkeys and human neuroimaging studies, that information on sound location is processed within a dorsolateral "where" stream including the caudal STG. A similar, but opposite, auditory shift was obtained after rTMS of secondary visual areas of the right OC. Processing of auditory information in the OC has previously been shown to exist only in blind persons. Thus, the latter finding provides the first evidence of an involvement of the visual cortex in spatial hearing in sighted human subjects, and suggests a close interconnection of the neural representation of auditory and visual space. Because rTMS induced systematic shifts in auditory lateralization, but not a general deterioration, we propose that rTMS of STG or OC specifically affected neuronal circuits transforming auditory spatial coordinates in order to maintain alignment with vision.
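
    The interaural time difference, the cue whose perception was shifted by rTMS in this study, can be approximated for a given azimuth with the classic Woodworth spherical-head formula. The sketch below is only a textbook approximation (the head radius and speed of sound are assumed values), not the model used by the authors.

    ```python
    import math

    def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
        """Approximate interaural time difference (seconds) for a source at a
        given azimuth (0 deg = straight ahead, 90 deg = directly lateral),
        using the Woodworth spherical-head model: ITD = (a / c) * (theta + sin(theta))."""
        theta = math.radians(azimuth_deg)
        return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

    # Example: a source at 90 deg azimuth yields roughly 0.65 ms of ITD.
    print(f"{woodworth_itd(90.0) * 1e6:.0f} microseconds")
    ```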

  7. Relations between frequency selectivity, temporal fine-structure processing, and speech reception in impaired hearing

    DEFF Research Database (Denmark)

    Strelcyk, Olaf; Dau, Torsten

    2009-01-01

    Frequency selectivity, temporal fine-structure (TFS) processing, and speech reception were assessed for six normal-hearing (NH) listeners, ten sensorineurally hearing-impaired (HI) listeners with similar high-frequency losses, and two listeners with an obscure dysfunction (OD). TFS processing was investigated at low frequencies in regions of normal hearing, through measurements of binaural masked detection, tone lateralization, and monaural frequency modulation (FM) detection. Lateralization and FM detection thresholds were measured in quiet and in background noise. Speech reception thresholds were ... and binaural TFS-processing deficits in the HI listeners, no relation was found between TFS processing and frequency selectivity. The effect of noise on TFS processing was not larger for the HI listeners than for the NH listeners. Finally, TFS-processing performance was correlated with speech reception ...
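
    Monaural FM detection, one of the TFS measures listed above, is typically measured with a sinusoidally frequency-modulated tone whose modulation depth is varied adaptively. The snippet below merely generates such a stimulus; the 750 Hz carrier, 2 Hz modulation rate, and 5 Hz modulation depth are illustrative assumptions rather than the parameters used in the study.

    ```python
    import numpy as np

    def fm_tone(fc=750.0, fm=2.0, delta_f=5.0, dur=1.0, fs=44100):
        """Sinusoidally frequency-modulated tone.

        Instantaneous frequency: f(t) = fc + delta_f * sin(2*pi*fm*t).
        The phase is the integral of the instantaneous frequency, so the
        modulation index is beta = delta_f / fm.
        """
        t = np.arange(int(fs * dur)) / fs
        beta = delta_f / fm
        phase = 2 * np.pi * fc * t - beta * np.cos(2 * np.pi * fm * t)
        return np.sin(phase)
    ```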

  8. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng Rong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants performed an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial-filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a suppression effect was observed in both the DMS and control tasks. Furthermore, coherence analysis revealed a DMS-specific enhancement of beta-band (12-20 Hz) functional interaction between sources in the left auditory cortex and those in the left inferior frontal gyrus, a region shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions through frontal-temporal functional interactions.
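
    The beta-band coherence reported above is a standard spectral measure of functional interaction between two source time series. A minimal sketch of estimating it with Welch-averaged cross-spectra is shown below; the variable names, sampling rate, and the 12-20 Hz averaging band are assumptions for illustration, not the authors' analysis code.

    ```python
    import numpy as np
    from scipy.signal import coherence

    def beta_band_coherence(src_auditory, src_ifg, fs=600.0, band=(12.0, 20.0)):
        """Welch-based magnitude-squared coherence between two source time
        series, averaged over the beta band.

        src_auditory, src_ifg : 1-D arrays, e.g. beamformer source waveforms
            from left auditory cortex and left inferior frontal gyrus.
        fs : sampling rate in Hz (illustrative value).
        """
        f, cxy = coherence(src_auditory, src_ifg, fs=fs, nperseg=int(fs))  # 1 s segments
        mask = (f >= band[0]) & (f <= band[1])
        return cxy[mask].mean()
    ```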

  9. Auditory processing, reading and writing in the Silver-Russell syndrome: case report

    Directory of Open Access Journals (Sweden)

    Patrícia Fernandes Garcia

    2012-03-01

    This case report describes the speech-language pathology aspects of auditory processing, reading, and writing of a male patient diagnosed with Silver-Russell syndrome. At two months of age the patient presented a weight-for-height deficit; broad forehead; small, prominent, and low-set ears; high palate; discrete micrognathia; blue sclerae; cafe-au-lait spots; overlapping of the first and second right toes; gastroesophageal reflux; high-pitched voice and cry; mild neuropsychomotor developmental delay; and difficulty gaining weight, receiving the diagnosis of the syndrome. In the psychological evaluation, conducted when he was 8 years old, the patient presented a normal intellectual level, with cognitive difficulties involving sustained attention, concentration, immediate verbal memory, and emotional and behavioral processes. For the assessment of reading and writing and their underlying ...