WorldWideScience

Sample records for auditory temporal integration

  1. Visual and auditory temporal integration in healthy younger and older adults.

    Science.gov (United States)

    Saija, Jefta D; Başkent, Deniz; Andringa, Tjeerd C; Akyürek, Elkan G

    2017-09-04

    As people age, they tend to integrate successive visual stimuli over longer intervals than younger adults. It may be expected that temporal integration is affected similarly in other modalities, possibly due to general, age-related cognitive slowing of the brain. However, the previous literature does not provide convincing evidence that this is the case in audition. One hypothesis is that the primacy of time in audition attenuates the degree to which temporal integration in that modality extends over time as a function of age. We sought to settle this issue by comparing visual and auditory temporal integration in younger and older adults directly, achieved by minimizing task differences between modalities. Participants were presented with a visual or an auditory rapid serial presentation task, at 40-100 ms/item. In both tasks, two subsequent targets were to be identified. Critically, these could be perceptually integrated and reported by the participants as such, providing a direct measure of temporal integration. In both tasks, older participants integrated more than younger adults, especially when stimuli were presented across longer time intervals. This difference was more pronounced in vision and only marginally significant in audition. We conclude that temporal integration increases with age in both modalities, but that this change might be slightly less pronounced in audition.

  2. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.
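
    The optimal-latency measurement above reduces to a curve-fitting problem: collect synchrony judgments across a range of audiovisual lags and locate the peak of the fitted curve. A minimal Python sketch with made-up data, assuming a Gaussian-shaped synchrony profile (names and values are illustrative, not the study's):

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical data: audiovisual lags (ms; positive = sound after image)
      # and the proportion of "synchronous" responses at each lag.
      lags = np.array([-200., -100., -50., 0., 50., 100., 200.])
      p_sync = np.array([0.10, 0.35, 0.60, 0.80, 0.85, 0.55, 0.15])

      def synchrony_curve(lag, peak, width, amp):
          # Bell-shaped response profile; its peak is the optimal auditory lag.
          return amp * np.exp(-0.5 * ((lag - peak) / width) ** 2)

      (peak, width, amp), _ = curve_fit(synchrony_curve, lags, p_sync,
                                        p0=(40.0, 80.0, 0.8))
      print(f"Point of subjective simultaneity: auditory lag of {peak:.0f} ms")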

  3. Auditory temporal resolution and integration - stages of analyzing time-varying sounds

    DEFF Research Database (Denmark)

    Pedersen, Benjamin

    2007-01-01

    An important property of sound is its variation as a function of time, which carries much relevant information about the origin of a given sound. Further, in analyzing the 'meaning' of a sound perceptually, the temporal variation is of tremendous importance. In spite of its perceptual importance, much is still unknown about how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, approached via four different listening experiments designed to probe several aspects of temporal processing, including temporal pattern recognition, where listeners have to identify properties of the actual patterns of level changes. Typically, temporal processing is modeled by some sort of temporal summation or integration device. The results of the present experiments are to a large extent incompatible with this modeling …

  4. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  5. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  6. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual …

  7. Auditory event files: integrating auditory perception and action planning.

    Science.gov (United States)

    Zmigrod, Sharon; Hommel, Bernhard

    2009-02-01

    The features of perceived objects are processed in distinct neural pathways, which calls for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken together, the findings are consistent with the idea of episodic event files integrating perception and action plans.

  8. Temporal prediction errors in visual and auditory cortices.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2014-04-14

    To form a coherent percept of the environment, the brain needs to bind sensory signals emanating from a common source, but to segregate those from different sources [1]. Temporal correlations and synchrony act as prominent cues for multisensory integration [2-4], but the neural mechanisms by which such cues are identified remain unclear. Predictive coding suggests that the brain iteratively optimizes an internal model of its environment by minimizing the errors between its predictions and the sensory inputs [5,6]. This model enables the brain to predict the temporal evolution of natural audiovisual inputs and their statistical (for example, temporal) relationship. A prediction of this theory is that asynchronous audiovisual signals violating the model's predictions induce an error signal that depends on the directionality of the audiovisual asynchrony. As the visual system generates the dominant temporal predictions for visual leading asynchrony, the delayed auditory inputs are expected to generate a prediction error signal in the auditory system (and vice versa for auditory leading asynchrony). Using functional magnetic resonance imaging (fMRI), we measured participants' brain responses to synchronous, visual leading and auditory leading movies of speech, sinewave speech or music. In line with predictive coding, auditory leading asynchrony elicited a prediction error in visual cortices and visual leading asynchrony in auditory cortices. Our results reveal predictive coding as a generic mechanism to temporally bind signals from multiple senses into a coherent percept.

  9. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e., the surgical resection of the hippocampus, the amygdala, the temporal pole, and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds. These tests evaluated auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL were found to present with similar deficits in pitch retention, and in identification and short-term memorisation of environmental sounds, while showing no impairment in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  10. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory bands (low gamma and theta). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
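
    Phase tracking of the kind reported above is commonly quantified as inter-trial phase coherence: the length of the mean resultant vector of single-trial spectral phases at the frequency of interest. A self-contained sketch on simulated trials; a generic illustration, not the study's analysis pipeline:

      import numpy as np

      def inter_trial_phase_coherence(trials, sfreq, freq):
          # trials: array (n_trials, n_samples) of MEG/EEG epochs.
          # Returns a value in [0, 1]; 1 means identical phase on every trial.
          n = trials.shape[1]
          spectra = np.fft.rfft(trials, axis=1)
          bin_idx = int(round(freq * n / sfreq))       # frequency bin of interest
          phases = np.angle(spectra[:, bin_idx])       # one phase per trial
          return np.abs(np.mean(np.exp(1j * phases)))  # mean resultant length

      # Illustrative use: 50 simulated trials phase-locked at 5 Hz (theta).
      rng = np.random.default_rng(0)
      t = np.arange(0, 1.0, 1 / 500)                   # 1 s at 500 Hz
      trials = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1.0, (50, t.size))
      print(inter_trial_phase_coherence(trials, sfreq=500, freq=5))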

  11. Implicit temporal expectation attenuates auditory attentional blink.

    Directory of Open Access Journals (Sweden)

    Dawei Shen

    Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded, event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with the increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.

  12. Auditory temporal processing in patients with temporal lobe epilepsy.

    Science.gov (United States)

    Lavasani, Azam Navaei; Mohammadkhani, Ghassem; Motamedi, Mahmoud; Karimi, Leyla Jalilvand; Jalaei, Shohreh; Shojaei, Fereshteh Sadat; Danesh, Ali; Azimi, Hadi

    2016-07-01

    Auditory temporal processing is the main feature of speech processing ability. Patients with temporal lobe epilepsy (TLE), despite their normal hearing sensitivity, may present speech recognition disorders. The present study was carried out to evaluate auditory temporal processing in patients with unilateral TLE. The study included 25 patients with epilepsy: 11 patients with right temporal lobe epilepsy (RTLE) and 14 with left temporal lobe epilepsy (LTLE), with a mean age of 31.1 years, and 18 control participants with a mean age of 29.4 years. The two experimental groups and the control group were evaluated via gap-in-noise (GIN) and duration pattern sequence (DPS) tests. One-way ANOVA was run to analyze the data. The mean GIN threshold in the control group was better than that in participants with LTLE and RTLE. Also, the percentage of correct responses on the DPS test in the control group and in participants with RTLE was better than that in participants with LTLE. Patients with TLE have difficulties in temporal processing. Difficulties are more significant in patients with LTLE, likely because the left temporal lobe is specialized for the processing of temporal information.

  13. Processing Temporal Modulations in Binaural and Monaural Auditory Stimuli by Neurons in the Inferior Colliculus and Auditory Cortex

    OpenAIRE

    Fitzpatrick, Douglas C.; Roberts, Jason M.; Kuwada, Shigeyuki; Kim, Duck O.; Filipovic, Blagoje

    2009-01-01

    Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating interaural phase difference …

  14. Altered auditory and multisensory temporal processing in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Leslie D Kwakye

    2011-01-01

    Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as by repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli have also frequently been reported. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recent published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically-developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies.

  15. Altered Auditory and Multisensory Temporal Processing in Autism Spectrum Disorders

    Science.gov (United States)

    Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Cascio, Carissa J.; Stone, Wendy L.; Wallace, Mark T.

    2011-01-01

    Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as by repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli also have been reported frequently. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recent published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies. PMID:21258617

  16. Temporal coherence sensitivity in auditory cortex.

    Science.gov (United States)

    Barbour, Dennis L; Wang, Xiaoqin

    2002-11-01

    Natural sounds often contain energy over a broad spectral range and consequently overlap in frequency when they occur simultaneously; however, such sounds under normal circumstances can be distinguished perceptually (e.g., the cocktail party effect). Sound components arising from different sources have distinct (i.e., incoherent) modulations, and incoherence appears to be one important cue used by the auditory system to segregate sounds into separately perceived acoustic objects. Here we show that, in the primary auditory cortex of awake marmoset monkeys, many neurons responsive to amplitude- or frequency-modulated tones at a particular carrier frequency [the characteristic frequency (CF)] also demonstrate sensitivity to the relative modulation phase between two otherwise identically modulated tones: one at CF and one at a different carrier frequency. Changes in relative modulation phase reflect alterations in temporal coherence between the two tones, and the most common neuronal response was found to be a maximum of suppression for the coherent condition. Coherence sensitivity was generally found in a narrow frequency range in the inhibitory portions of the frequency response areas (FRA), indicating that only some off-CF neuronal inputs into these cortical neurons interact with on-CF inputs on the same time scales. Over the population of neurons studied, carrier frequencies showing coherence sensitivity were found to coincide with the carrier frequencies of inhibition, implying that inhibitory inputs create the effect. The lack of strong coherence-induced facilitation also supports this interpretation. Coherence sensitivity was found to be greatest for modulation frequencies of 16-128 Hz, which is higher than the phase-locking capability of most cortical neurons, implying that subcortical neurons could play a role in the phenomenon. Collectively, these results reveal that auditory cortical neurons receive some off-CF inputs temporally matched and some temporally …
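
    The coherence manipulation described above can be reproduced by presenting two identically modulated tones at different carrier frequencies and varying only their relative modulation phase; zero phase difference gives the temporally coherent condition. A stimulus sketch with illustrative parameter values:

      import numpy as np

      def two_tone_am(f1, f2, mod_hz, rel_phase, dur_s=0.5, sr=44100, depth=1.0):
          # Two simultaneous AM tones; rel_phase (radians) sets the relative
          # modulation phase: 0 = temporally coherent envelopes.
          t = np.arange(int(dur_s * sr)) / sr
          env1 = 1 + depth * np.sin(2 * np.pi * mod_hz * t)
          env2 = 1 + depth * np.sin(2 * np.pi * mod_hz * t + rel_phase)
          return env1 * np.sin(2 * np.pi * f1 * t) + env2 * np.sin(2 * np.pi * f2 * t)

      # On-CF tone at 4 kHz plus an off-CF tone, 32 Hz envelopes, coherent vs. not.
      coherent = two_tone_am(4000, 5000, 32, rel_phase=0.0)
      incoherent = two_tone_am(4000, 5000, 32, rel_phase=np.pi)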

  17. Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…

  18. Intact Spectral but Abnormal Temporal Processing of Auditory Stimuli in Autism

    Science.gov (United States)

    Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning autism and 23 matched controls…

  19. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.

  20. Neural correlates of auditory temporal predictions during sensorimotor synchronization.

    Science.gov (United States)

    Pecenka, Nadine; Engel, Annerose; Keller, Peter E

    2013-01-01

    Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
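
    The prediction measure referred to in both records is based on lagged cross-correlations between the tapper's inter-tap intervals and the pacing sequence's inter-onset intervals: predictors correlate more strongly at lag 0, trackers at lag 1. A sketch under that assumption, on simulated data rather than the study's:

      import numpy as np

      def prediction_index(iti, ioi):
          # iti: inter-tap intervals; ioi: pacing inter-onset intervals (same length).
          # Predictors correlate with the current interval (lag 0) more than with
          # the previous one (lag 1); trackers show the reverse.
          r_lag0 = np.corrcoef(iti, ioi)[0, 1]           # tap interval vs. current IOI
          r_lag1 = np.corrcoef(iti[1:], ioi[:-1])[0, 1]  # vs. previous IOI
          return r_lag0 - r_lag1                          # > 0 suggests prediction

      # Illustrative: a pacing sequence with gradual tempo changes.
      ioi = 500 + 50 * np.sin(np.linspace(0, 3 * np.pi, 60))      # ms
      rng = np.random.default_rng(1)
      predictor_iti = ioi + rng.normal(0, 10, 60)                 # anticipates current IOI
      tracker_iti = np.roll(ioi, 1) + rng.normal(0, 10, 60)       # copies last IOI
      print(prediction_index(predictor_iti[1:], ioi[1:]))         # positive
      print(prediction_index(tracker_iti[1:], ioi[1:]))           # negative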

  1. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent …

  2. EFFECTS OF PHYSICAL REHABILITATION INTEGRATED WITH RHYTHMIC AUDITORY STIMULATION ON SPATIO-TEMPORAL AND KINEMATIC PARAMETERS OF GAIT IN PARKINSON’S DISEASE

    Directory of Open Access Journals (Sweden)

    Massimiliano Pau

    2016-08-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson’s disease (PD). In this context, the use of Rhythmic Auditory Stimulation (RAS) has been proven useful in improving several spatio-temporal parameters, but scarce information is available on its effect on gait patterns from a kinematic viewpoint. In this study we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of intensive rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4±11.1 years, Hoehn & Yahr stages 1-3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a three-month follow-up, using concise measures (the Gait Profile Score and Gait Variable Score, GPS and GVS, respectively), which describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence, and in correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, driven mainly by significant decreases in the GVS score associated with the hip flexion-extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new, or improving current, rehabilitative treatments.
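
    The GPS and GVS are root-mean-square deviations of a subject's gait kinematics from reference data: one GVS per kinematic variable, and the GPS as the RMS of the GVS values. A schematic computation (variable names and numbers made up):

      import numpy as np

      def gait_variable_score(subject, reference):
          # RMS difference between a subject's kinematic curve and the
          # reference mean, both sampled over the gait cycle (e.g., 51 points).
          return np.sqrt(np.mean((subject - reference) ** 2))

      def gait_profile_score(gvs_values):
          # GPS: RMS of the Gait Variable Scores across all kinematic variables.
          return np.sqrt(np.mean(np.asarray(gvs_values) ** 2))

      # Illustrative: one variable (hip flexion-extension, degrees).
      cycle = np.linspace(0, 100, 51)
      ref_hip = 30 * np.sin(2 * np.pi * cycle / 100)
      subj_hip = ref_hip + 5                      # constant 5-degree offset
      gvs = [gait_variable_score(subj_hip, ref_hip)]
      print(gait_profile_score(gvs))              # equals 5.0 here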

  3. Enhanced auditory temporal gap detection in listeners with musical training.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn

    2014-08-01

    Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians are investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model - Indian-classical music - to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds compared to nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians.

  4. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica Gori

    2012-09-01

    Adults integrate multisensory information optimally (e.g., Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g., Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task, in both perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that visual-auditory adult-like behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and of audition in a temporal visuo-audio task.
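
    The Bayesian (maximum-likelihood) prediction referenced above combines two cues by inverse-variance weighting, so the more precise modality dominates and the predicted bimodal variance is lower than either unimodal variance. A worked sketch with illustrative numbers:

      import numpy as np

      def bayesian_fusion(est_a, var_a, est_v, var_v):
          # Ernst & Banks-style cue combination: inverse-variance weighted
          # average, plus the predicted bimodal variance.
          w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
          fused = w_a * est_a + (1 - w_a) * est_v
          fused_var = (var_a * var_v) / (var_a + var_v)
          return fused, fused_var

      # Illustrative: audition more precise in time, so it dominates the percept.
      print(bayesian_fusion(est_a=100.0, var_a=25.0, est_v=120.0, var_v=100.0))
      # -> (104.0, 20.0): estimate pulled toward audition, variance below both.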

  5. Auditory Temporal Processing Abilities in Early Azari-Persian Bilinguals

    Directory of Open Access Journals (Sweden)

    Roya Sanayi

    2013-10-01

    Introduction: Auditory temporal resolution and auditory temporal ordering are two major components of the auditory temporal processing abilities that contribute to speech perception and language development. Auditory temporal resolution and auditory temporal ordering can be evaluated by gap-in-noise (GIN) and pitch-pattern-sequence (PPS) tests, respectively. In this survey, the effect of bilingualism as a potential confounding factor on auditory temporal processing abilities was investigated in early Azari-Persian bilinguals. Materials and Methods: In this cross-sectional non-interventional study, GIN and PPS tests were performed on 24 (12 men and 12 women) early Azari-Persian bilingual persons and 24 (12 men and 12 women) Persian monolingual subjects in the age range of 18-30 years, with a mean age of 24.57 years in bilingual and 24.68 years in monolingual subjects. Data were analyzed with t-tests using SPSS software version 16. Results: There was no statistically significant difference between early Azari-Persian bilinguals and Persian monolinguals in the mean gap threshold and mean percentage of correct responses on the GIN test, or in the average percentage of correct responses on the PPS test (P≥0.05). Conclusion: According to the findings of this study, bilingualism did not have a notable effect on auditory temporal processing abilities.

  6. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

    van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.

    2007-01-01

    Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in …

  7. Processing temporal modulations in binaural and monaural auditory stimuli by neurons in the inferior colliculus and auditory cortex.

    Science.gov (United States)

    Fitzpatrick, Douglas C; Roberts, Jason M; Kuwada, Shigeyuki; Kim, Duck O; Filipovic, Blagoje

    2009-12-01

    Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating interaural phase difference, while recording from neurons in the unanesthetized rabbit. We found that the cutoff frequency for neural synchronization to the binaural beat frequency (BBF) decreased between the IC and auditory cortex, and that this decrease was associated with an increase in the group delay. These features indicate that there is an increased temporal integration window in the cortex compared to the IC, complementing that seen with monaural signals. Comparable measurements of responses to amplitude modulation showed that the monaural and binaural temporal integration windows at the cortical level were quantitatively as well as qualitatively similar, suggesting that intrinsic membrane properties and afferent synapses to the cortical neurons govern the dynamic processing. The upper limits of synchronization to the BBF and the band-pass tuning characteristics of cortical neurons are a close match to human psychophysics.
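
    The binaural beat stimulus itself is simple to construct: each ear receives a pure tone of slightly different frequency, so the interaural phase difference cycles once per period of the difference frequency (the BBF). A generation sketch with illustrative parameters:

      import numpy as np

      def binaural_beat(carrier_hz=500.0, bbf_hz=2.0, dur_s=2.0, sr=44100):
          # Stereo signal whose interaural phase difference rotates at bbf_hz:
          # the left ear gets the carrier, the right ear carrier + bbf, so the
          # interaural phase advances through 360 degrees bbf_hz times per second.
          t = np.arange(int(dur_s * sr)) / sr
          left = np.sin(2 * np.pi * carrier_hz * t)
          right = np.sin(2 * np.pi * (carrier_hz + bbf_hz) * t)
          return np.stack([left, right], axis=1)   # shape (n_samples, 2)

      stim = binaural_beat()
      print(stim.shape)   # (88200, 2)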

  8. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2012-05-01

    Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  9. Auditory spectral versus spatial temporal order judgment: Threshold distribution analysis.

    Science.gov (United States)

    Fostick, Leah; Babkoff, Harvey

    2017-05-01

    Some researchers have suggested that one central mechanism is responsible for temporal order judgments (TOJ), within and across sensory channels. This suggestion is supported by findings of similar TOJ thresholds in same-modality and cross-modality TOJ tasks. In the present study, we challenge this idea by analyzing and comparing the threshold distributions of the spectral and spatial TOJ tasks. In spectral TOJ, the tones differ in their frequency ("high" and "low") and are delivered either binaurally or monaurally. In spatial (or dichotic) TOJ, the two tones are identical but are presented asynchronously to the two ears and thus differ with respect to which ear received the first tone and which ear received the second tone ("left-right"/"right-left"). Although both tasks are regarded as measures of auditory temporal processing, a review of data published in the literature suggests that they trigger different patterns of response. The aim of the current study was to systematically examine spectral and spatial TOJ threshold distributions across a large number of studies. Data are based on 388 participants in 13 spectral TOJ experiments, and 222 participants in 9 spatial TOJ experiments. None of the spatial TOJ distributions deviated significantly from the Gaussian, while all of the spectral TOJ threshold distributions were skewed to the right, with more than half of the participants accurately judging temporal order at very short interstimulus intervals (ISI). The data do not support the hypothesis that one central mechanism is responsible for all temporal order judgments. We suggest that different perceptual strategies are employed when performing spectral TOJ than when performing spatial TOJ. We posit that the spectral TOJ paradigm may provide the opportunity for two-tone masking or temporal integration, which is sensitive to the order of the tones and thus provides perceptual cues that may be used to judge temporal order. This possibility should be considered when interpreting …
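
    The distribution analysis described above boils down to testing each task's threshold sample for deviation from the Gaussian and for right-skew. A sketch on simulated thresholds (the sample sizes match the record, but the values are simulated, not the study's data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Simulated thresholds (ms): spatial TOJ roughly Gaussian; spectral TOJ
      # right-skewed, with many listeners succeeding at very short ISIs.
      spatial = rng.normal(loc=60, scale=15, size=222)
      spectral = rng.gamma(shape=1.5, scale=40, size=388)

      for name, x in [("spatial", spatial), ("spectral", spectral)]:
          w, p = stats.shapiro(x)                  # deviation from Gaussian?
          print(f"{name}: skewness={stats.skew(x):.2f}, Shapiro-Wilk p={p:.3g}")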

  10. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporally reversed version on various temporal scales in seven experiments. We demonstrate a U-shaped memory-transferring pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.

  11. Temporal Organization of Sound Information in Auditory Memory

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporally reversed version on various temporal scales in seven experiments. We demonstrate a U-shaped memory-transferring pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulated temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation. PMID:28674512
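
    The local temporal reversal used in these two records can be implemented by cutting the waveform into fixed-length chunks and reversing the samples within each chunk while keeping chunk order intact. A sketch, assuming a 1-D sample array:

      import numpy as np

      def local_reverse(signal, chunk_ms, sr):
          # Reverse the samples inside every chunk of length chunk_ms,
          # keeping the order of the chunks themselves intact.
          out = signal.copy()
          chunk = max(1, int(sr * chunk_ms / 1000))
          for start in range(0, len(out), chunk):
              out[start:start + chunk] = out[start:start + chunk][::-1]
          return out

      rng = np.random.default_rng(0)
      noise = rng.normal(size=44100)                     # 1 s of white noise at 44.1 kHz
      reversed_200ms = local_reverse(noise, 200, 44100)  # scale at the U-shape minimum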

  12. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory signals and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by the average delay time in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
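
    DAF itself is just a fixed delay inserted between vocal input and headphone output. A minimal offline sketch of the manipulation on a recorded signal (array-based; a real-time system would use a ring buffer):

      import numpy as np

      def delayed_feedback(voice, delay_ms, sr):
          # Return the signal the participant would hear: the input voice
          # delayed by delay_ms (zeros before the first delayed sample arrives).
          d = int(sr * delay_ms / 1000)
          out = np.zeros_like(voice)
          if d < len(voice):
              out[d:] = voice[:len(voice) - d]
          return out

      sr = 16000
      voice = np.random.default_rng(0).normal(size=sr)  # stand-in for 1 s of speech
      for delay in (0, 66, 133):          # the adaptation delays used in the study
          heard = delayed_feedback(voice, delay, sr)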

  13. MODELING SPECTRAL AND TEMPORAL MASKING IN THE HUMAN AUDITORY SYSTEM

    DEFF Research Database (Denmark)

    Dau, Torsten; Jepsen, Morten Løve; Ewert, Stephan D.

    2007-01-01

    An auditory signal processing model is presented that simulates psychoacoustical data from a large variety of experimental conditions related to spectral and temporal masking. The model is based on the modulation filterbank model by Dau et al. [J. Acoust. Soc. Am. 102, 2892-2905 (1997)] but includes …
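
    The central idea of a modulation filterbank, heavily simplified, is to extract the envelope of the (peripherally filtered) signal and decompose it with a bank of bandpass filters tuned to different modulation rates. The sketch below is a crude stand-in, not the Dau et al. implementation, which also includes peripheral filtering, adaptation loops, and an optimal detector:

      import numpy as np
      from scipy.signal import butter, hilbert, sosfiltfilt

      def modulation_filterbank(signal, sr, centers=(4, 8, 16, 32, 64, 128)):
          # Crude envelope decomposition: Hilbert envelope followed by
          # octave-wide bandpass modulation filters.
          envelope = np.abs(hilbert(signal))
          outputs = {}
          for fc in centers:
              sos = butter(2, [fc / np.sqrt(2), fc * np.sqrt(2)],
                           btype="bandpass", fs=sr, output="sos")
              outputs[fc] = sosfiltfilt(sos, envelope)
          return outputs

      sr = 16000
      t = np.arange(0, 1.0, 1 / sr)
      am_tone = (1 + 0.8 * np.sin(2 * np.pi * 16 * t)) * np.sin(2 * np.pi * 1000 * t)
      bands = modulation_filterbank(am_tone, sr)  # energy concentrates in the 16 Hz band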

  14. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to synchronous (simultaneous) and asynchronous (90 ms lag and lead) multisensory somatosensory and auditory stimulation, compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  15. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  16. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory signals and voice sounds under DAF with an adaptation technique. Participants read sentences with specific delay times of DAF (0, 30, 75, 120 ms) for three minutes to induce 'Lag Adaptation'. After the adaptation, they judged the simultaneity between the motor sensation and the vocal sound fed back while producing a simple voice sound rather than speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  17. Gabor analysis of auditory midbrain receptive fields: spectro-temporal and binaural composition.

    Science.gov (United States)

    Qiu, Anqi; Schreiner, Christoph E; Escabí, Monty A

    2003-07-01

    The spectro-temporal receptive field (STRF) is a model representation of the excitatory and inhibitory integration area of auditory neurons. Recently it has been used to study spectral and temporal aspects of monaural integration in auditory centers. Here we report the properties of monaural STRFs and the relationship between ipsi- and contralateral inputs to neurons of the central nucleus of the inferior colliculus (ICC) in cats. First, we use an optimal singular-value decomposition method to approximate auditory STRFs as a sum of time-frequency separable Gabor functions. This procedure extracts nine physiologically meaningful parameters. The STRFs of approximately 60% of collicular neurons are well described by a time-frequency separable Gabor STRF model, whereas the remaining neurons exhibited obliquely oriented or multiple excitatory/inhibitory subfields that require a nonseparable Gabor fitting procedure. Parametric analysis reveals distinct spectro-temporal tradeoffs in receptive field size and modulation filtering resolution. Comparison with an identical model used to study spatio-temporal integration areas of visual neurons further shows that auditory and visual STRFs share numerous structural properties. We then use the Gabor STRF model to compare quantitatively the receptive field properties of contra- and ipsilateral inputs to the ICC. We show that most interaural STRF parameters are highly correlated bilaterally. However, the spectral and temporal phases of ipsi- and contralateral STRFs often differ significantly. This suggests that activity originating from each ear shares various spectro-temporal response properties, such as temporal delay, bandwidth, and center frequency, but has shifted or interleaved patterns of excitation and inhibition. These differences in converging monaural receptive fields expand binaural processing capacity beyond interaural time and intensity aspects and may enable colliculus neurons to detect disparities in the spectro-temporal …
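
    A time-frequency separable Gabor STRF is the outer product of a spectral Gabor and a temporal Gabor. One plausible parameterization with nine parameters (latency, duration, best modulation rate, temporal phase, best frequency, bandwidth, spectral modulation, spectral phase, gain) is sketched below; the study's exact parameterization may differ:

      import numpy as np

      def gabor(x, center, sigma, freq, phase):
          # 1-D Gabor: Gaussian envelope times a sinusoidal carrier.
          return (np.exp(-0.5 * ((x - center) / sigma) ** 2)
                  * np.cos(2 * np.pi * freq * (x - center) + phase))

      def separable_gabor_strf(t, f, t0=0.01, sigma_t=0.005, mod_rate=50.0,
                               phase_t=0.0, f0=4.0, sigma_f=0.5,
                               ripple=0.8, phase_f=0.0, gain=1.0):
          # Separable STRF: outer product of spectral and temporal Gabors.
          # t in seconds, f in octaves (all parameter values illustrative).
          return gain * np.outer(gabor(f, f0, sigma_f, ripple, phase_f),
                                 gabor(t, t0, sigma_t, mod_rate, phase_t))

      t = np.linspace(0, 0.05, 100)       # 50 ms of delay
      f = np.linspace(0, 8, 64)           # 8 octaves
      strf = separable_gabor_strf(t, f)   # shape (64, 100): frequency x time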

  18. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  19. Prior auditory information shapes visual category-selectivity in ventral occipito-temporal cortex.

    Science.gov (United States)

    Adam, Ruth; Noppeney, Uta

    2010-10-01

    Objects in our natural environment generate signals in multiple sensory modalities. This fMRI study investigated the influence of prior task-irrelevant auditory information on visually-evoked category-selective activations in the ventral occipito-temporal cortex. Subjects categorized pictures as landmarks or animal faces, while ignoring the preceding congruent or incongruent sound. Behaviorally, subjects responded more slowly to incongruent than congruent stimuli. At the neural level, the lateral and medial prefrontal cortices showed increased activations for incongruent relative to congruent stimuli, consistent with their role in response selection. In contrast, the parahippocampal gyri combined visual and auditory information additively: activation was greater for visual landmarks than animal faces and landmark-related sounds than animal vocalizations, resulting in increased parahippocampal selectivity for congruent audiovisual landmarks. Effective connectivity analyses showed that this amplification of visual landmark-selectivity was mediated by increased negative coupling of the parahippocampal gyrus with the superior temporal sulcus for congruent stimuli. Thus, task-irrelevant auditory information influences visual object categorization at two stages. In the ventral occipito-temporal cortex auditory and visual category information are combined additively to sharpen visual category-selective responses. In the left inferior frontal sulcus, as indexed by a significant incongruency effect, visual and auditory category information are integrated interactively for response selection.

  20. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective.

    Science.gov (United States)

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2015-02-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized.

  1. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    Science.gov (United States)

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2013-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, and found systematic depth dependencies in responses to second-and-later noise bursts in slow (1–10 bursts/s) trains of noise bursts. At all depths, responses to noise bursts within a train usually decreased with increasing train rate; however, the rolloff with increasing train rate occurred at faster rates in more superficial layers. Moreover, in some recordings from mid-to-superficial layers, responses to noise bursts within a 3–4 bursts/s train were stronger than responses to noise bursts in slower trains. This non-monotonicity with train rate was especially pronounced in more superficial layers of the anterior auditory field, where responses to noise bursts within the context of a slow train were sometimes even stronger than responses to the noise burst at train onset. These findings may reflect depth dependence in suppression and recovery of cortical activity following a stimulus, which we suggest could arise from laminar differences in synaptic depression at feedforward and recurrent synapses. PMID:21900562
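
    The stimuli here were slow trains of brief noise bursts. A hypothetical generator for that stimulus class might look as follows (the sampling rate, burst duration, and level are assumptions, not the paper's exact parameters):

        import numpy as np

        # Hypothetical sketch of a noise-burst-train stimulus (1-10 bursts/s).
        def noise_burst_train(rate_hz, n_bursts=5, burst_ms=25, fs=44100, seed=1):
            rng = np.random.default_rng(seed)
            period = int(fs / rate_hz)              # samples between burst onsets
            burst_len = int(fs * burst_ms / 1000)
            train = np.zeros(period * n_bursts)
            for k in range(n_bursts):
                onset = k * period
                train[onset:onset + burst_len] = rng.normal(0, 0.1, burst_len)
            return train

        for rate in (1, 4, 10):
            s = noise_burst_train(rate)
            print(f"{rate:>2} bursts/s -> {s.size / 44100:.2f} s stimulus")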

  2. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  3. Auditory processing in patients with temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Juliana Meneguello

    2006-08-01

    Temporal lobe epilepsy, one of the most common and most difficult to control forms of the disease, causes excessive electrical discharges where the auditory pathway has its final station. Correct processing of auditory stimuli requires the anatomical and functional integrity of all structures of the auditory pathway. AIM: To assess auditory processing in patients with temporal lobe epilepsy with respect to the mechanisms of discrimination of sound sequences and tone patterns, discrimination of the direction of the sound source, and selective attention to verbal and non-verbal sounds. METHOD: Eight individuals with confirmed temporal lobe epilepsy, with a focus restricted to that region, were evaluated using special auditory tests: the Sound Localization, Duration Pattern, Dichotic Digits, and Non-Verbal Dichotic tests. Their performance was compared with that of individuals without neurological alterations (a case-control study). RESULTS: Subjects with temporal lobe epilepsy performed similarly to the control group on discrimination of the direction of the sound source, and worse on the other mechanisms assessed. CONCLUSION: Individuals with temporal lobe epilepsy showed greater impairment of auditory processing than age-matched individuals without cortical damage.

  4. Effect of passive smoking on auditory temporal resolution in children.

    Science.gov (United States)

    Durante, Alessandra Spada; Massa, Beatriz; Pucci, Beatriz; Gudayol, Nicolly; Gameiro, Marcella; Lopes, Cristiane

    2017-06-01

    To determine the effect of passive smoking on auditory temporal resolution in primary school children, based on the hypothesis that individuals exposed to smoking exhibit impaired performance. Auditory temporal resolution was evaluated using the Gaps In Noise (GIN) test. Exposure to passive smoking was assessed by measuring a nicotine metabolite (cotinine) excreted in the first urine of the day. The study included 90 children, with a mean age of 10.2 ± 0.1 years, from a public school in São Paulo. Participants were divided into two groups: a study group, comprising 45 children exposed to passive smoking (cotinine > 5 ng/mL); and a control group, comprising 45 children who were not exposed to passive smoking. All participants had normal audiometry and immittance test results. Statistically significant differences were found (p < 0.05): children exposed to passive smoking performed more poorly on the auditory temporal resolution assessment, both in thresholds and in the percentage of correct responses.
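
    A GIN-style trial embeds a brief silent gap in broadband noise and asks the listener to detect it. A minimal sketch of such a stimulus (durations and level are illustrative placeholders, not the clinical test's calibrated values):

        import numpy as np

        # Sketch of a gap-in-noise trial: broadband noise with a silent gap.
        def gap_in_noise(noise_ms=600, gap_ms=6, gap_at_ms=300, fs=44100, seed=2):
            rng = np.random.default_rng(seed)
            noise = rng.normal(0, 0.1, int(fs * noise_ms / 1000))
            start = int(fs * gap_at_ms / 1000)
            stop = start + int(fs * gap_ms / 1000)
            noise[start:stop] = 0.0                 # the silent gap to detect
            return noise

        trial = gap_in_noise(gap_ms=6)
        print(f"trial length: {trial.size} samples")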

  5. Spatial Grouping Determines Temporal Integration

    Science.gov (United States)

    Hermens, Frouke; Scharnowski, Frank; Herzog, Michael H.

    2009-01-01

    To make sense out of a continuously changing visual world, people need to integrate features across space and time. Despite more than a century of research, the mechanisms of feature integration are still a matter of debate. To examine how temporal and spatial integration interact, the authors measured the amount of temporal fusion (a measure of…

  6. Anatomical pathways for auditory memory II: Information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Directory of Open Access Journals (Sweden)

    Monica Munoz-Lopez

    2015-05-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 minutes. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 seconds. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. It is possible, therefore, that the anatomical pathways differ. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher-order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex, and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

  7. Frontal and superior temporal auditory processing abnormalities in schizophrenia.

    Science.gov (United States)

    Chen, Yu-Han; Edgar, J Christopher; Huang, Mingxiong; Hunter, Michael A; Epstein, Emerson; Howell, Breannan; Lu, Brett Y; Bustillo, Juan; Miller, Gregory A; Cañive, José M

    2013-01-01

    Although magnetoencephalography (MEG) studies show superior temporal gyrus (STG) auditory processing abnormalities in schizophrenia at 50 and 100 ms, EEG and corticography studies suggest involvement of additional brain areas (e.g., frontal areas) during this interval. Study goals were to identify 30 to 130 ms auditory encoding processes in schizophrenia (SZ) and healthy controls (HC) and group differences throughout the cortex. The standard paired-click task was administered to 19 SZ and 21 HC subjects during MEG recording. Vector-based Spatial-temporal Analysis using L1-minimum-norm (VESTAL) provided 4D maps of activity from 30 to 130 ms. Within-group t-tests compared post-stimulus 50 ms and 100 ms activity to baseline. Between-group t-tests examined 50 and 100 ms group differences. Bilateral 50 and 100 ms STG activity was observed in both groups. HC had stronger bilateral 50 and 100 ms STG activity than SZ. In addition to the STG group difference, non-STG activity was also observed in both groups. For example, whereas HC had stronger left and right inferior frontal gyrus activity than SZ, SZ had stronger right superior frontal gyrus and left supramarginal gyrus activity than HC. Less STG activity was observed in SZ than HC, indicating encoding problems in SZ. Yet auditory encoding abnormalities are not specific to STG, as group differences were observed in frontal and SMG areas. Thus, present findings indicate that individuals with SZ show abnormalities in multiple nodes of a concurrently activated auditory network.

  8. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  9. Forward Masking: Temporal Integration or Adaptation?

    DEFF Research Database (Denmark)

    Ewert, Stephan D.; Hau, Ole; Dau, Torsten

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-date overview of topics including … the physiological mechanisms of binaural processing in mammals; integration of the different stimulus features into auditory scene analysis; physiological mechanisms related to the formation of auditory objects; speech perception; and limitations of auditory perception resulting from hearing disorders.

  10. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  11. The spatio-temporal profile of multisensory integration.

    Science.gov (United States)

    Starke, Johanna; Ball, Felix; Heinze, Hans-Jochen; Noesselt, Toemme

    2017-10-23

    Task-irrelevant visual stimuli can enhance auditory perception. However, while there is some neurophysiological evidence for mechanisms that underlie the phenomenon, the neural basis of visually induced effects on auditory perception remains unknown. Combining fMRI and EEG with psychophysical measurements in two independent studies, we identified the neural underpinnings and temporal dynamics of visually induced auditory enhancement. Lower- and higher-intensity sounds were paired with a non-informative visual stimulus, while participants performed an auditory detection task. Behaviourally, visual co-stimulation enhanced auditory sensitivity. Using fMRI, enhanced BOLD signals were observed in primary auditory cortex for low-intensity audiovisual stimuli which scaled with subject-specific enhancement in perceptual sensitivity. Concordantly, a modulation of event-related potentials could already be observed over frontal electrodes at an early latency (30-80 ms), which again scaled with subject-specific behavioural benefits. Later modulations starting around 280 ms, that is in the time range of the P3, did not fit this pattern of brain-behaviour correspondence. Hence, the latency of the corresponding fMRI-EEG brain-behaviour modulation points at an early interplay of visual and auditory signals in low-level auditory cortex, potentially mediated by crosstalk at the level of the thalamus. However, fMRI signals in primary auditory cortex, auditory thalamus and the P50 for higher-intensity auditory stimuli were also elevated by visual co-stimulation (in the absence of any behavioural effect) suggesting a general, intensity-independent integration mechanism. We propose that this automatic interaction occurs at the level of the thalamus and might signify a first step of audiovisual interplay necessary for visually induced perceptual enhancement of auditory perception.
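
    Perceptual sensitivity of the kind measured here is standardly quantified as d′ from hit and false-alarm rates. A sketch with invented rates chosen only to illustrate a visually induced benefit (these are not the study's data):

        from scipy.stats import norm

        # Standard signal-detection index: d' = z(hit rate) - z(false-alarm rate).
        def d_prime(hit_rate, fa_rate):
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        auditory_only = d_prime(hit_rate=0.70, fa_rate=0.20)  # invented rates
        audiovisual = d_prime(hit_rate=0.82, fa_rate=0.20)    # invented rates
        print(f"A  : d' = {auditory_only:.2f}")
        print(f"AV : d' = {audiovisual:.2f}  (visual co-stimulation benefit)")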

  12. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi Toida

    2016-01-01

    Temporal contiguity between an action and the corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant's action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
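
    A 50% detection point like the one reported above is obtained by fitting a psychometric function to delay-detection rates. A generic sketch (the delays and detection rates below are synthetic, not the study's data):

        import numpy as np
        from scipy.optimize import curve_fit

        # Fit a logistic psychometric function to delay-detection rates and
        # read off the 50% point. All data points here are synthetic.
        def logistic(x, x0, k):
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        delays = np.array([0, 66, 133, 200, 266, 333, 400])          # ms
        p_detect = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.93, 0.98])

        (x0, k), _ = curve_fit(logistic, delays, p_detect, p0=[200.0, 0.02])
        print(f"50% detection point: {x0:.0f} ms (slope k = {k:.3f})")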

  13. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    … of the rate of clicks in calls. … of auditory neurons in the laminar nucleus of the torus semicircularis (TS) of X. laevis specializes in encoding vocalization click rates. We recorded single TS units while pure tones, natural calls, and synthetic clicks were presented directly to the tympanum via a vibration-stimulation probe. Synthesized click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies … The majority of neurons (85%) were selective for click rates, and this selectivity remained unchanged over sound levels 10 to 20 dB above threshold. Selective neurons give phasic, tonic, or adapting responses to tone bursts and click trains. Some algorithms that could compute …

  14. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    Science.gov (United States)

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  15. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria Medalla

    2014-04-01

    No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  16. Spatio-temporal source cluster analysis reveals fronto-temporal auditory change processing differences within a shared autistic and schizotypal trait phenotype

    Directory of Open Access Journals (Sweden)

    Talitha C. Ford

    2017-01-01

    These data demonstrate a deficit in right fronto-temporal processing of an auditory change for those with more of the shared SD phenotype, indicating that right fronto-temporal auditory processing may be associated with psychosocial functioning.

  17. Repetition suppression in auditory-motor regions to pitch and temporal structure in music.

    Science.gov (United States)

    Brown, Rachel M; Chen, Joyce L; Hollinger, Avrum; Penhune, Virginia B; Palmer, Caroline; Zatorre, Robert J

    2013-02-01

    Music performance requires control of two sequential structures: the ordering of pitches and the temporal intervals between successive pitches. Whether pitch and temporal structures are processed as separate or integrated features remains unclear. A repetition suppression paradigm compared neural and behavioral correlates of mapping pitch sequences and temporal sequences to motor movements in music performance. Fourteen pianists listened to and performed novel melodies on an MR-compatible piano keyboard during fMRI scanning. The pitch or temporal patterns in the melodies either changed or repeated (remained the same) across consecutive trials. We expected decreased neural response to the patterns (pitch or temporal) that repeated across trials relative to patterns that changed. Pitch and temporal accuracy were high, and pitch accuracy improved when either pitch or temporal sequences repeated over trials. Repetition of either pitch or temporal sequences was associated with linear BOLD decrease in frontal-parietal brain regions including dorsal and ventral premotor cortex, pre-SMA, and superior parietal cortex. Pitch sequence repetition (in contrast to temporal sequence repetition) was associated with linear BOLD decrease in the intraparietal sulcus (IPS) while pianists listened to melodies they were about to perform. Decreased BOLD response in IPS also predicted increase in pitch accuracy only when pitch sequences repeated. Thus, behavioral performance and neural response in sensorimotor mapping networks were sensitive to both pitch and temporal structure, suggesting that pitch and temporal structure are largely integrated in auditory-motor transformations. IPS may be involved in transforming pitch sequences into spatial coordinates for accurate piano performance.

  18. The cascaded nature of lexical selection and integration in auditory sentence processing

    NARCIS (Netherlands)

    Brink, D. van den; Brown, C.M.; Hagoort, P.

    2006-01-01

    An event-related brain potential experiment was carried out to investigate the temporal relationship between lexical selection and semantic integration in auditory sentence processing. Participants were presented with spoken sentences that ended with a word that was either semantically congruent

  19. Temporal variability of spectro-temporal receptive fields in the anesthetized auditory cortex

    Directory of Open Access Journals (Sweden)

    Arne Freerk Meyer

    2014-12-01

    Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation, or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimating sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term, temporally localized receptive field may deviate stochastically with time-varying standard deviation. The corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure, even for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with a characteristic temporal resolution of 5 s to 30 s, based on model simulations and responses from in total 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms), overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization reveals a higher degree of STRF variability in auditory cortex than in the midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
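
    A simplified linear-Gaussian version of this idea (the paper itself uses a generalized linear model) can be sketched as ridge regression that shrinks the short-term estimate toward the long-term STRF rather than toward zero; the dimensions and noise levels below are arbitrary:

        import numpy as np

        # MAP estimate of a short-term receptive field w with a Gaussian prior
        # centered on the long-term static estimate w0:
        #   w_map = argmin ||y - X w||^2 + lam * ||w - w0||^2
        rng = np.random.default_rng(3)
        n_samples, n_dims = 400, 30

        w0 = rng.normal(0, 1, n_dims)                 # long-term static RF (given)
        w_true = w0 + rng.normal(0, 0.3, n_dims)      # short-term RF drifts around it

        X = rng.normal(0, 1, (n_samples, n_dims))     # stimulus design matrix
        y = X @ w_true + rng.normal(0, 1, n_samples)  # noisy responses

        lam = 5.0                                     # prior precision (ridge weight)
        w_map = np.linalg.solve(X.T @ X + lam * np.eye(n_dims), X.T @ y + lam * w0)

        w_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # no-prior baseline
        print(f"error vs. truth, MAP: {np.linalg.norm(w_map - w_true):.3f}, "
              f"OLS: {np.linalg.norm(w_ols - w_true):.3f}")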

  20. Segregation and integration of auditory streams when listening to multi-part music.

    Science.gov (United States)

    Ragert, Marie; Fairhurst, Merle T; Keller, Peter E

    2014-01-01

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in Western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load (as shown through difficulty ratings) and the interaction of the temporal and structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams.

  2. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study.

    Science.gov (United States)

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    To investigate correlations between children's performance on auditory temporal tests (Frequency Pattern and Gaps in Noise--GIN) and measures of IQ, attention, memory, and age. Fifteen typically developing children aged 7 to 12 years with normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern) were applied, along with a memory test (Digit Span), attention tests (auditory and visual modalities), and an intelligence test (RAVEN Progressive Matrices). A significant positive correlation, considered good, was found between the Frequency Pattern test and age (p < 0.05); no significant correlations were found between the GIN test and the variables tested. Auditory temporal skills thus seem to be influenced by different factors: whereas performance on temporal ordering appears to reflect maturational processes, performance on temporal resolution was not influenced by any of the aspects investigated.

  3. Temporal integration of consecutive tones into synthetic vowels demonstrates perceptual assembly in audition

    NARCIS (Netherlands)

    Saija, Jefta D.; Andringa, Tjeerd C.; Başkent, Deniz; Akyürek, Elkan G.

    Temporal integration is the perceptual process combining sensory stimulation over time into longer percepts that can span over 10 times the duration of a minimally detectable stimulus. Particularly in the auditory domain, such "long-term" temporal integration has been characterized as a relatively

  4. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
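
    Event related spectral perturbation is typically computed as trial-averaged time-frequency power expressed in dB relative to a pre-stimulus baseline. A minimal single-channel sketch on synthetic data (the study applied this to ICA component activations; the simulated post-stimulus alpha synchronization below is only a stand-in):

        import numpy as np
        from scipy.signal import stft

        rng = np.random.default_rng(4)
        fs, n_trials = 250, 60
        t = np.arange(-0.5, 1.5, 1 / fs)           # epoch, stimulus at t = 0

        # Simulate post-stimulus 10 Hz (alpha) synchronization on top of noise.
        alpha = 1.5 * np.sin(2 * np.pi * 10 * t) * (t > 0.2)
        trials = alpha + rng.normal(0, 1, (n_trials, t.size))

        f, seg_t, Z = stft(trials, fs=fs, nperseg=64, axis=-1)
        power = np.mean(np.abs(Z) ** 2, axis=0)    # average power over trials

        # seg_t runs from 0 over the 2 s epoch; the first 0.5 s is pre-stimulus.
        baseline = power[:, seg_t < 0.5].mean(axis=1, keepdims=True)
        ersp_db = 10 * np.log10(power / baseline)  # ERS > 0 dB, ERD < 0 dB

        alpha_band = (f >= 8) & (f <= 13)
        print(f"peak alpha ERSP: {ersp_db[alpha_band].max():.1f} dB")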

  5. Sensory-to-motor integration during auditory repetition: a combined fMRI and lesion study.

    Science.gov (United States)

    Parker Jones, 'ōiwi; Prejawa, Susan; Hope, Thomas M H; Oberhuber, Marion; Seghier, Mohamed L; Leff, Alex P; Green, David W; Price, Cathy J

    2014-01-01

    The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction (TPJ), referred to as Sylvian-parietal-temporal region (Spt), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom up or top down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in eight patients who had selective difficulties repeating heard words but with preserved word comprehension, picture naming and verbal fluency (i.e., conduction aphasia). All eight patients had white-matter tract damage in the vicinity of the arcuate fasciculus and only one of the eight patients had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing.

  7. Effect of Unilateral Temporal Lobe Resection on Short‐Term Memory for Auditory Object and Sound Location

    National Research Council Canada - National Science Library

    LANCELOT, CÉLINE; SAMSON, SÉVERINE; AHAD, PIERRE; BAULAC, MICHEL

    2003-01-01

    Abstract: To investigate auditory spatial and nonspatial short-term memory, a sound location discrimination task and an auditory object discrimination task were used in patients with medial temporal lobe resection...

  8. Listening to another sense: somatosensory integration in the auditory system.

    Science.gov (United States)

    Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E

    2015-07-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.

  11. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.
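
    Gap detection thresholds of the kind used here are commonly estimated with an adaptive staircase. A sketch of a 2-down/1-up procedure (which converges on roughly 70.7% correct) run against a simulated listener; the step size, stopping rule, and listener model are assumptions for illustration:

        import numpy as np

        rng = np.random.default_rng(5)
        true_threshold_ms = 4.0                    # simulated listener's threshold

        def listener_detects(gap_ms):
            # Detection probability rises smoothly around the true threshold.
            p = 1.0 / (1.0 + np.exp(-(gap_ms - true_threshold_ms) / 0.5))
            return rng.random() < p

        gap, step, correct_run = 10.0, 1.0, 0
        reversals, last_direction = [], 0
        while len(reversals) < 8:
            if listener_detects(gap):
                correct_run += 1
                if correct_run == 2:               # 2 correct -> smaller gap
                    correct_run, direction = 0, -1
                    if last_direction not in (0, direction):
                        reversals.append(gap)      # direction change = reversal
                    gap, last_direction = max(gap - step, 0.5), direction
            else:                                  # 1 wrong -> larger gap
                correct_run, direction = 0, +1
                if last_direction not in (0, direction):
                    reversals.append(gap)
                gap, last_direction = gap + step, direction

        print(f"estimated threshold: {np.mean(reversals[-6:]):.1f} ms "
              f"(true: {true_threshold_ms} ms)")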

  12. Deficit of auditory temporal processing in children with dyslexia-dysgraphia

    Directory of Open Access Journals (Sweden)

    Sima Tajik

    2012-12-01

    Background and Aim: Auditory temporal processing reflects an important aspect of auditory performance; a deficit in it can impair a child's speech, language learning, and reading. Temporal resolution, a subcomponent of temporal processing, can be evaluated with the gap-in-noise detection test. Given the relation between auditory temporal processing deficits and the phonological disorder of children with dyslexia-dysgraphia, the aim of this study was to evaluate these children with the gap-in-noise (GIN) test. Methods: The gap-in-noise test was performed on 28 normal and 24 dyslexic-dysgraphic children aged 11-12 years. The mean approximate threshold and the percentage of correct answers were compared between the groups. Results: Neither measure differed significantly between the right and left ears (p>0.05). The mean approximate threshold of children with dyslexia-dysgraphia (6.97 ms, SD=1.09) was significantly higher (p<0.001) than that of the normal group (5.05 ms, SD=0.92), and their mean percentage of correct answers (58.05%, SD=4.98) was lower than that of the normal group (69.97%, SD=7.16; p<0.001). Conclusion: Abnormal temporal resolution was found in children with dyslexia-dysgraphia on the gap-in-noise test. Since the brainstem and auditory cortex are responsible for auditory temporal processing, structural and functional differences in these areas between normal and dyslexic-dysgraphic children probably lead to abnormal coding of auditory temporal information and, as a result, to impaired auditory temporal processing.

  13. Auditory temporal processing deficits and language disorders in patients with neurofibromatosis type 1.

    Science.gov (United States)

    Batista, Pollyanna Barros; Lemos, Stela Maris Aguiar; Rodrigues, Luiz Oswaldo Carneiro; de Rezende, Nilton Alves

    2014-01-01

    Previous findings from a case report raised the question of whether other patients with neurofibromatosis type 1 (NF1) may have abnormal central auditory function, particularly auditory temporal processing. We hypothesized that it is associated with language and learning disabilities in this population. The aim of this study was to measure central auditory temporal function in NF1 patients and correlate it with the results of language evaluation tests. A descriptive/comparative study including 25 NF1 individuals and 22 healthy controls compared their performances on audiometric evaluation and auditory behavioral testing (Sequential Verbal Memory, Sequential Non-Verbal Memory, Frequency Pattern, Duration Pattern, and Gaps in Noise Tests). To assess language performance, two tests (phonological and syntactic awareness) were also conducted. The study showed that all participants had normal peripheral acoustic hearing. Differences were found between the NF1 and control groups in the temporal auditory processing tests [Sequential Verbal Memory (P=0.009), Sequential Non-Verbal Memory (P=0.028), Frequency Patterns (P=0.001), Duration Patterns (P=0.000), and Gaps in Noise (P=0.000)] and in language tests. The results of Pearson correlation analysis demonstrated the presence of positive correlations between the phonological awareness test and Frequency Patterns humming (r=0.560, P=0.001), Frequency Patterns labeling (r=0.415, P=0.022) and Duration Pattern humming (r=0.569, P=0.001). These results suggest that the neurofibromin deficiency found in NF1 patients is associated with auditory temporal processing deficits, which may contribute to the cognitive impairment, learning disabilities, and attention deficits that are common in this disorder. The reader will be able to: (1) describe the auditory temporal processing in patients with neurofibromatosis type 1; and (2) describe the impact of the auditory temporal deficits in language in this population.

  14. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alterative pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
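
    Facilitation of response times by multisensory stimulation, as studied here, is often tested against the race-model inequality: integration is inferred where the audiovisual response-time distribution exceeds the sum of the unisensory ones. A sketch on synthetic response times (this names a standard analysis in the field, not necessarily the exact one used in the study):

        import numpy as np

        # Race-model inequality on synthetic response times (ms).
        rng = np.random.default_rng(6)
        rt_a = rng.normal(420, 60, 1000)           # auditory-only
        rt_v = rng.normal(440, 60, 1000)           # visual-only
        rt_av = rng.normal(370, 55, 1000)          # audiovisual

        ts = np.arange(200, 700, 10)
        cdf = lambda rt: np.array([(rt <= t).mean() for t in ts])

        # Positive values indicate violations of the race model, i.e. evidence
        # for genuine integration rather than mere statistical facilitation.
        violation = cdf(rt_av) - np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
        print(f"max race-model violation: {violation.max():.3f} "
              f"at {ts[np.argmax(violation)]} ms")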

  15. Auditory Spectral Integration in the Perception of Static Vowels

    Science.gov (United States)

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  16. Spectral vs. Temporal Auditory Processing in Specific Language Impairment: A Developmental ERP Study

    Science.gov (United States)

    Ceponiene, R.; Cummings, A.; Wulfeck, B.; Ballantyne, A.; Townsend, J.

    2009-01-01

    Pre-linguistic sensory deficits, especially in "temporal" processing, have been implicated in developmental language impairment (LI). However, recent evidence has been equivocal with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral…

  17. Cortical Auditory-Evoked Responses in Preterm Neonates: Revisited by Spectral and Temporal Analyses.

    Science.gov (United States)

    Kaminska, A; Delattre, V; Laschet, J; Dubois, J; Labidurie, M; Duval, A; Manresa, A; Magny, J-F; Hovhannisyan, S; Mokhtari, M; Ouss, L; Boissel, A; Hertz-Pannier, L; Sintsov, M; Minlebaev, M; Khazipov, R; Chiron, C

    2017-08-11

    Characteristic preterm EEG patterns of "Delta-brushes" (DBs) have been reported in the temporal cortex following auditory stimuli, but their spatio-temporal dynamics remain elusive. Using 32-electrode EEG recordings and co-registration of electrode positions to 3D-MRI of age-matched neonates, we explored the cortical auditory-evoked responses (AERs) after 'click' stimuli in 30 healthy neonates aged 30-38 post-menstrual weeks (PMW). (1) We visually identified auditory-evoked DBs within AERs in all the babies between 30 and 33 PMW and a decreasing response rate afterwards. (2) The AERs showed an increase in EEG power from the delta to the gamma frequency band over the middle and posterior temporal regions, with higher values in quiet sleep and on the right. (3) Time-frequency and averaging analyses showed that the delta component of DBs, which negatively peaked around 550 and 750 ms over the middle and posterior temporal regions, respectively, was superimposed with fast (alpha-gamma) oscillations and corresponded to the late part of the cortical auditory-evoked potential (CAEP), a feature missed when using classical CAEP processing. As the evoked DB rate and the AERs' delta-to-alpha frequency power decreased until full term, auditory-evoked DBs are thus associated with the prenatal development of auditory processing and may suggest an early emerging hemispheric specialization.

  18. Temporal coordination in joint music performance: effects of endogenous rhythms and auditory feedback.

    Science.gov (United States)

    Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline

    2015-02-01

    Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.

  19. The Temporal Window of Multisensory Integration under Competing Circumstances

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    2011-10-01

    Our brain tends to integrate information from different sensory modalities when it is presented within the so-called temporal window of integration. Whereas other studies investigated this window using a single audio-visual event, we examined the effect of competing spatio-temporal circumstances. Participants saw nineteen luminance-modulating discs while hearing an amplitude-modulated tone. The luminance modulation of each disc had a unique temporal phase (between −380 and 380 ms, in steps of 40 ms), one of which was synchronized with the tone. Participants were instructed to identify which disc was synchronized with the tone. The waveforms of the auditory and visual modulations were either both sinusoidal or both square. Under sine-wave conditions, participants selected discs with phase offsets indistinguishable from guessing. In contrast, under square-wave conditions, participants selected the correct disc (phase = 0 ms) with a high degree of accuracy. When errors did occur, they tended to decrease with temporal phase separation, yielding an integration window of ∼140 ms. These results indicate that reliable AV integration depends upon transient signals. Interestingly, spatial analysis of confusion density profiles indicates that transient elements left and right of fixation are integrated more efficiently than elements above or below. This anisotropy suggests that the temporal window of AV integration is constrained by intra-hemispheric competition.

  20. Lateralization of auditory rhythm length in temporal lobe lesions

    NARCIS (Netherlands)

    Alpherts, W.C.J.; Vermeulen, J.; Franken, M.L.O.; Hendriks, M.P.H.; Veelen, C.W.M. van; Rijen, P.C. van

    2002-01-01

    In the visual modality, short rhythmic stimuli have been shown to be better processed (sequentially) by the left hemisphere, while longer rhythms appear to be better processed (holistically) by the right hemisphere. This study was set up to see if the same holds in the auditory modality. The rhythm

  1. Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses.

    Science.gov (United States)

    Scheidt, Ryan E; Kale, Sushrut; Heinz, Michael G

    2010-10-01

    Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus-time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids.
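
    As a concrete illustration of the metrics described above, the following sketch builds a post-stimulus-time histogram from simulated spike trains and derives an onset rate, a steady-state rate, and percent adaptation. The spike statistics, bin width, and analysis windows are illustrative assumptions, not the study's data or parameter choices.

        # PSTH sketch on simulated, adapting spike trains (50-ms tone).
        import numpy as np

        rng = np.random.default_rng(1)
        n_trials, tone_dur = 200, 0.050
        spikes = []
        for _ in range(n_trials):
            isis = rng.exponential(np.linspace(0.002, 0.010, 25))  # lengthening ISIs
            s = np.cumsum(isis)
            spikes.append(s[s < tone_dur])

        bin_w = 0.001                              # 1-ms bins
        edges = np.arange(0, tone_dur + bin_w, bin_w)
        counts, _ = np.histogram(np.concatenate(spikes), bins=edges)
        psth = counts / (n_trials * bin_w)         # rate in spikes/s per bin

        onset = psth[:5].mean()                    # mean rate over first 5 ms
        steady = psth[-10:].mean()                 # mean rate over last 10 ms
        print(f"onset {onset:.0f} sp/s, steady {steady:.0f} sp/s, "
              f"adaptation {100 * (onset - steady) / onset:.0f}%")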

  2. The effects of voluntary movements on auditory-haptic and haptic-haptic temporal order judgments.

    Science.gov (United States)

    Frissen, Ilja; Ziat, Mounia; Campion, Gianni; Hayward, Vincent; Guastavino, Catherine

    2012-10-01

    In two experiments we investigated the effects of voluntary movements on temporal haptic perception. Measures of sensitivity (JND) and temporal alignment (PSS) were obtained from temporal order judgments made on intermodal auditory-haptic (Experiment 1) or intramodal haptic (Experiment 2) stimulus pairs under three movement conditions. In the baseline, static condition, the arm of the participants remained stationary. In the passive condition, the arm was displaced by a servo-controlled motorized device. In the active condition, the participants moved voluntarily. The auditory stimulus was a short, 500 Hz tone presented over headphones, and the haptic stimulus was a brief suprathreshold force pulse applied to the tip of the index finger orthogonally to the finger movement. Active movement did not significantly affect discrimination sensitivity on the auditory-haptic stimulus pairs, whereas it significantly improved sensitivity in the case of the haptic stimulus pair, demonstrating a key role for motor command information in temporal sensitivity in the haptic system. Points of subjective simultaneity were by and large coincident with physical simultaneity, with one striking exception in the passive condition with the auditory-haptic stimulus pair. In the latter case, the haptic stimulus had to be presented 45 ms before the auditory stimulus in order to obtain subjective simultaneity. A model is proposed to explain the discrimination performance.
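
    For readers unfamiliar with how the two measures are obtained, the sketch below fits a cumulative Gaussian to the proportion of "auditory first" responses across stimulus onset asynchronies and reads off the PSS (the 50% point) and the JND (here, the common 75%-point convention). The response proportions are invented for illustration, not the paper's data.

        # Fitting a temporal-order-judgment psychometric function (toy data).
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        soa = np.array([-120, -90, -60, -30, 0, 30, 60, 90, 120])  # ms (A - H)
        p_a_first = np.array([0.05, 0.10, 0.22, 0.40, 0.55,
                              0.72, 0.85, 0.93, 0.97])

        def cum_gauss(x, pss, sigma):
            # p("auditory first") as a cumulative Gaussian of SOA
            return norm.cdf(x, loc=pss, scale=sigma)

        (pss, sigma), _ = curve_fit(cum_gauss, soa, p_a_first, p0=(0, 50))
        jnd = sigma * norm.ppf(0.75)   # 75%-point JND convention
        print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")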

  3. Temporal Feature Integration for Music Organisation

    OpenAIRE

    Meng, Anders; Larsen, Jan; Hansen, Lars Kai

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisa...

  4. Temporal processing and long-latency auditory evoked potential in stutterers

    Directory of Open Access Journals (Sweden)

    Raquel Prestes

    Introduction: Stuttering is a speech fluency disorder that may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. Objective: To characterize temporal processing and the long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. Methods: The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n = 20) and non-stutterers (n = 21), comparable in age, education, and sex. All subjects underwent the Duration Pattern test, the Random Gap Detection test, and long-latency auditory evoked potential recording. Results: Individuals who stutter showed poorer performance on the Duration Pattern and Random Gap Detection tests than fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of the N2 and P3 components; stutterers had higher latency values. Conclusion: Stutterers perform poorly in temporal processing and have higher latency values for the N2 and P3 components.

  5. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with the auditory skill areas most commonly addressed in…

  6. Temporal integration and instrumental conditioned reinforcement

    OpenAIRE

    Thrailkill, Eric A.; Shahan, Timothy A.

    2014-01-01

    Stimuli associated with primary reinforcement for instrumental behavior are widely believed to acquire the capacity to function as conditioned reinforcers via Pavlovian conditioning. Some Pavlovian conditioning studies suggest that animals learn the important temporal relations between stimuli and integrate such temporal information over separate experiences to form a temporal map. The present experiment examined whether Pavlovian conditioning can establish a positive instrumental conditioned...

  7. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters the timing and rates of pyramidal cell spikes in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single- and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at the 20-ms pairing interval for single-unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  8. Probing the time course of head-motion cues integration during auditory scene analysis.

    Science.gov (United States)

    Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio

    2014-01-01

    The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  10. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  11. Syntactic and auditory spatial processing in the human temporal cortex: an MEG study.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Hahne, Anja; Schröger, Erich; Friederici, Angela D

    2011-07-15

    Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex.

  12. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP.

  13. A physiologically inspired model of auditory stream segregation based on a temporal coherence analysis

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2012-01-01

    The ability to perceptually separate acoustic sources and focus one’s attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422-438 (2008)] was used as a front end of a model for auditory stream segregation. A temporal coherence analysis [M. Elhilali, C. Ling, C. Micheyl, A. J. Oxenham and S. Shamma, Neuron 61, 317-329 (2009)] was applied at the output of the preprocessing, using the coherence across tonotopic channels to group…
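
    A minimal sketch of the temporal-coherence idea, assuming simplified synthetic envelopes in place of the auditory-model front end: channels whose envelopes covary over a sliding window are grouped into one stream, while anti-correlated channels are segregated.

        # Windowed envelope correlation as a toy temporal-coherence measure.
        import numpy as np

        fs = 1000
        t = np.arange(0, 2, 1 / fs)
        env_lo = (np.sin(2 * np.pi * 5 * t) > 0.9).astype(float)  # 5-Hz bursts
        env_hi_sync = env_lo.copy()                      # synchronous channel
        env_hi_async = np.roll(env_lo, int(0.1 * fs))    # anti-phase channel

        def coherence(x, y, win=int(0.5 * fs)):
            """Mean windowed Pearson correlation between two envelopes."""
            cs = [np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
                  for i in range(0, len(x) - win, win)]
            return np.nanmean(cs)

        print("sync :", coherence(env_lo, env_hi_sync))    # ~1 -> one stream
        print("async:", coherence(env_lo, env_hi_async))   # low -> two streams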

  14. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis, and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW, the topography of responses in quiet sleep differed for "click" and voice stimuli: responses to "clicks" became diffuse, but responses to voice remained limited to temporal areas. After the age of 35 PMW, auditory-evoked delta-brushes progressively disappeared and were replaced by a low-amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex.

  15. Local field potential correlates of auditory working memory in primate dorsal temporal pole.

    Science.gov (United States)

    Bigelow, James; Ng, Chi-Wing; Poremba, Amy

    2016-06-01

    Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components were observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special

  16. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging.
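
    The partial-correlation logic (age predicting TGDT while controlling for audibility) can be sketched as follows; the simulated data merely mimic the reported trend of roughly 1.05 ms per decade (0.105 ms per year) and are not the study's measurements.

        # Partial correlation by residualization (simulated data).
        import numpy as np

        rng = np.random.default_rng(2)
        n = 1071
        age = rng.uniform(18, 98, n)                       # years
        audibility = 0.3 * age + rng.normal(0, 10, n)      # worsens with age
        tgdt = 5 + 0.105 * age + 0.02 * audibility + rng.normal(0, 2, n)  # ms

        def residualize(y, x):
            """Residuals of y after regressing out x (with intercept)."""
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        r_partial = np.corrcoef(residualize(tgdt, audibility),
                                residualize(age, audibility))[0, 1]
        print(f"partial r(TGDT, age | audibility) = {r_partial:.2f}")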

  17. Temporal Feature Integration for Music Organisation

    DEFF Research Database (Denmark)

    Meng, Anders

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisation. Human evaluations of these have been obtained to assess the subjectivity of the datasets. Temporal feature integration has been used for ranking various short-time features at different time-scales. These include short-time features such as the Mel-frequency cepstral coefficients (MFCC), linear predictive coding (LPC) coefficients, and various MPEG-7 short-time features. The ‘consensus sensitivity…

  18. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
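
    A minimal sketch, assuming synthetic signals in place of real EEG, of how theta phase synchronization between two channels can be quantified as a phase-locking value (PLV): band-pass to theta, extract instantaneous phases with the Hilbert transform, and take the length of the mean phase-difference vector.

        # Theta-band phase-locking value between two synthetic channels.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 250
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(3)
        frontal = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size)
        temporal = np.sin(2 * np.pi * 6 * t - 0.8) + 0.5 * rng.normal(size=t.size)

        b, a = butter(4, [4, 8], btype="bandpass", fs=fs)  # theta (4-8 Hz)
        ph_f = np.angle(hilbert(filtfilt(b, a, frontal)))
        ph_t = np.angle(hilbert(filtfilt(b, a, temporal)))

        plv = np.abs(np.mean(np.exp(1j * (ph_f - ph_t))))  # 0 = none, 1 = perfect
        print(f"fronto-temporal theta PLV = {plv:.2f}")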

  20. Selective integration of auditory-visual looming cues by humans.

    Science.gov (United States)

    Cappe, Céline; Thut, Gregor; Romei, Vincenzo; Murray, Micah M

    2009-03-01

    An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.

  1. Temporal integration of consecutive tones into synthetic vowels demonstrates perceptual assembly in audition.

    Science.gov (United States)

    Saija, Jefta D; Andringa, Tjeerd C; Başkent, Deniz; Akyürek, Elkan G

    2014-04-01

    Temporal integration is the perceptual process combining sensory stimulation over time into longer percepts that can span over 10 times the duration of a minimally detectable stimulus. Particularly in the auditory domain, such "long-term" temporal integration has been characterized as a relatively simple function that acts chiefly to bridge brief input gaps, and which places integrated stimuli on temporal coordinates while preserving their temporal order information. These properties are not observed in visual temporal integration, suggesting they might be modality specific. The present study challenges that view. Participants were presented with rapid series of successive tone stimuli, in which two separate, deviant target tones were to be identified. Critically, the target tone pair would be perceived as a single synthetic vowel if they were interpreted to be simultaneous. During the task, despite that the targets were always sequential and never actually overlapping, listeners frequently reported hearing just one sound, the synthetic vowel, rather than two successive tones. The results demonstrate that auditory temporal integration, like its visual counterpart, truly assembles a percept from sensory inputs across time, and does not just summate time-ordered (identical) inputs or fill gaps therein. This finding supports the idea that temporal integration is a universal function of the human perceptual system.

  2. Tracking cortical entrainment in neural activity: Auditory processes in human temporal cortex

    Directory of Open Access Journals (Sweden)

    Andrew Thwaites

    2015-02-01

    A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begins in auditory cortex at ~80 ms and subsequently reappears, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in STS. The results suggest that loudness information is being used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down the STS towards the temporal pole.
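
    The lag search described above can be illustrated with a simple correlation over candidate lags between a predicted loudness contour and a neural signal. Both signals below are synthetic stand-ins, and the 80-ms lag is planted for illustration.

        # Finding the contour-to-neural lag by scanning correlations.
        import numpy as np

        fs = 200
        t = np.arange(0, 1, 1 / fs)
        rng = np.random.default_rng(7)
        loudness = np.abs(np.sin(2 * np.pi * 3 * t)) + 0.1 * rng.normal(size=t.size)
        neural = np.roll(loudness, int(0.080 * fs)) + 0.3 * rng.normal(size=t.size)

        lags = np.arange(0, int(0.4 * fs))        # test lags of 0-400 ms
        r = [np.corrcoef(loudness[:-l or None], neural[l:])[0, 1] for l in lags]
        best = lags[int(np.argmax(r))]
        print(f"best lag = {best / fs * 1e3:.0f} ms")   # ~80 ms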

  3. Spatial and Temporal High Processing of Visual and Auditory Stimuli in Cervical Dystonia.

    Science.gov (United States)

    Chillemi, Gaetana; Calamuneri, Alessandro; Morgante, Francesca; Terranova, Carmen; Rizzo, Vincenzo; Girlanda, Paolo; Ghilardi, Maria Felice; Quartarone, Angelo

    2017-01-01

    We investigated spatial and temporal cognitive processing in idiopathic cervical dystonia (CD) by means of specific tasks based on the perception of visual and auditory stimuli in the time and space domains. Previous psychophysiological studies have investigated temporal and spatial characteristics of the neural processing of sensory stimuli (mainly somatosensory and visual), whereas such processing at a higher cognitive level has not been sufficiently addressed. The impairment of time and space processing is likely driven by basal ganglia dysfunction; however, other cortical and subcortical areas, including the cerebellum, may also be involved. We tested 21 subjects with CD and 22 age-matched healthy controls with 4 recognition tasks exploring visuo-spatial, audio-spatial, visuo-temporal, and audio-temporal processing. Dystonic subjects were subdivided into three groups according to the head movement pattern (lateral: Laterocollis; rotation: Torticollis) and the presence of tremor (Tremor). We found significant alteration of spatial processing in the Laterocollis subgroup compared to controls, whereas impairment of temporal processing was observed in the Torticollis subgroup compared to controls. Our results suggest that dystonia is associated with a dysfunction of temporal and spatial processing for visual and auditory stimuli that could underlie the well-known abnormalities in sequence learning. Moreover, we suggest that different movement pattern types might lead to different dysfunctions at the cognitive level within the dystonic population.

  4. Temporal feature integration for music genre classification

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan

    2007-01-01

    …but they capture neither the temporal dynamics nor the dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music genre classification. This model gives two different feature sets, the diagonal autoregressive (DAR) and multivariate autoregressive (MAR) features, which are compared against the baseline mean-variance as well as two other temporal feature integration techniques. Reproducibility in the performance ranking of temporal feature integration methods was demonstrated using two data sets with five and eleven music genres…
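
    A minimal sketch of MAR temporal feature integration, assuming random placeholder frames in place of real MFCCs: an order-p autoregressive model is fit across the frame sequence by least squares, and its coefficient matrix is flattened into a single clip-level feature vector.

        # Multivariate AR(p) coefficients as an integrated clip feature.
        import numpy as np

        rng = np.random.default_rng(4)
        frames = rng.normal(size=(300, 13))   # 300 frames x 13 MFCCs (stand-in)
        p = 3                                 # AR order

        # Predict frame t from frames t-1 .. t-p (plus an intercept).
        Y = frames[p:]
        X = np.hstack([frames[p - k:-k] for k in range(1, p + 1)])
        X = np.hstack([np.ones((X.shape[0], 1)), X])

        A, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (1 + p*13) x 13
        mar_feature = A.ravel()                     # clip-level feature vector
        print(mar_feature.shape)                    # (520,)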

  5. Temporal Coding of Periodicity Pitch in the Auditory System: An Overview

    Directory of Open Access Journals (Sweden)

    Peter Cariani

    1999-01-01

    Population-wide inter-spike interval distributions are constructed by summing together intervals from the observed responses of many single Type I auditory nerve fibers. Features in such distributions correspond closely with pitches that are heard by human listeners. The most common all-order interval present in the auditory nerve array almost invariably corresponds to the pitch frequency, whereas the relative fraction of pitch-related intervals amongst all others qualitatively corresponds to the strength of the pitch. Consequently, many diverse aspects of pitch perception are explained in terms of such temporal representations. Similar stimulus-driven temporal discharge patterns are observed in major neuronal populations of the cochlear nucleus. Population-interval distributions constitute an alternative time-domain strategy for representing sensory information that complements spatially organized sensory maps. Similar autocorrelation-like representations are possible in other sensory systems, in which neural discharges are time-locked to stimulus waveforms.
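
    A minimal sketch of the population all-order interval representation, assuming simulated spike trains phase-locked to a 160-Hz fundamental: pooling every positive interspike interval across fibers and histogramming them yields a distribution whose most common interval falls at the stimulus period (1/F0).

        # Population all-order interspike-interval histogram (simulated).
        import numpy as np

        rng = np.random.default_rng(5)
        f0, dur, n_fibers = 160.0, 0.5, 50
        cycle = 1.0 / f0

        trains = []
        for _ in range(n_fibers):
            ticks = np.arange(0, dur, cycle)          # one slot per cycle
            keep = rng.random(ticks.size) < 0.6       # fire on ~60% of cycles
            trains.append(np.sort(ticks[keep] + rng.normal(0, 5e-4, keep.sum())))

        pooled = []                                   # all-order intervals < 20 ms
        for s in trains:
            d = s[None, :] - s[:, None]
            pooled.append(d[(d > 0) & (d < 0.020)])
        pooled = np.concatenate(pooled)

        hist, edges = np.histogram(pooled, bins=np.arange(0, 0.020, 1e-4))
        peak = edges[np.argmax(hist)] + 5e-5          # bin center of the mode
        print(f"mode ~ {peak * 1e3:.2f} ms -> pitch ~ {1 / peak:.0f} Hz")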

  6. Concurrent temporal channels for auditory processing: Oscillatory neural entrainment reveals segregation of function at different scales.

    Directory of Open Access Journals (Sweden)

    Xiangbin Teng

    2017-11-01

    Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4-7 Hz) and gamma band (31-45 Hz) ranges but, contrary to expectation, not at the timescale corresponding to alpha (8-12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at 3 timescales (approximately 190-, approximately 100-, and approximately 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics, but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
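
    Intertrial phase coherence (ITC), the statistic behind the entrainment result above, is the length of the mean resultant vector of single-trial phases at a given time-frequency point. The sketch below computes it for simulated phase distributions, one concentrated (entrained) and one uniform.

        # ITC on simulated single-trial phases.
        import numpy as np

        rng = np.random.default_rng(6)
        n_trials = 100
        entrained = rng.normal(0.5, 0.4, n_trials)         # clustered phases
        random_ph = rng.uniform(-np.pi, np.pi, n_trials)   # uniform phases

        def itc(phases):
            """Length of the mean resultant vector of trial phases."""
            return np.abs(np.mean(np.exp(1j * phases)))

        print(f"ITC, entrained: {itc(entrained):.2f}")     # near 1
        print(f"ITC, random:    {itc(random_ph):.2f}")     # near 0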

  7. Neural correlates of auditory recognition memory in the primate dorsal temporal pole.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2014-02-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.

  8. Temporal integration and instrumental conditioned reinforcement.

    Science.gov (United States)

    Thrailkill, Eric A; Shahan, Timothy A

    2014-09-01

    Stimuli associated with primary reinforcement for instrumental behavior are widely believed to acquire the capacity to function as conditioned reinforcers via Pavlovian conditioning. Some Pavlovian conditioning studies suggest that animals learn the important temporal relations between stimuli and integrate such temporal information over separate experiences to form a temporal map. The present experiment examined whether Pavlovian conditioning can establish a positive instrumental conditioned reinforcer through such temporal integration. Two groups of rats received either delay or trace appetitive conditioning in which a neutral stimulus predicted response-independent food deliveries (CS1→US). Both groups then experienced one session of backward second-order conditioning of the training CS1 and a novel CS2 (CS1-CS2 pairing). Finally, the ability of CS2 to function as a conditioned reinforcer for a new instrumental response (leverpressing) was assessed. Consistent with the previous demonstrations of temporal integration in fear conditioning, a CS2 previously trained in a trace-conditioning protocol served as a better instrumental conditioned reinforcer after backward second-order conditioning than did a CS2 previously trained in a delay protocol. These results suggest that an instrumental conditioned reinforcer can be established via temporal integration and raise challenges for existing quantitative accounts of instrumental conditioned reinforcement.

  9. The temporal window of audio-tactile integration in speech perception

    OpenAIRE

    Gick, Bryan; Ikegami, Yoko; Derrick, Donald

    2010-01-01

    Asynchronous cross-modal information is integrated asymmetrically in audio-visual perception. To test whether this asymmetry generalizes across modalities, auditory (aspirated “pa” and unaspirated “ba” stops) and tactile (slight, inaudible, cutaneous air puffs) signals were presented synchronously and asynchronously. Results were similar to previous AV studies: the temporal window of integration for the enhancement effect (but not the interference effect) was asymmetrical, allowing up to 200 ...

  10. The effect of temporal asynchrony on the multisensory integration of letters and speech sounds.

    Science.gov (United States)

    van Atteveldt, Nienke M; Formisano, Elia; Blomert, Leo; Goebel, Rainer

    2007-04-01

    Temporal proximity is a critical determinant for cross-modal integration by multisensory neurons. Information content may serve as an additional binding factor for more complex or less natural multisensory information. Letters and speech sounds, which form the basis of literacy acquisition, are not naturally related but associated through explicit learning. We investigated the relative importance of temporal proximity and information content on the integration of letters and speech sounds by manipulating both factors within the same functional magnetic resonance imaging (fMRI) design. The results reveal significant interactions between temporal proximity and content congruency in anterior and posterior auditory association cortex, indicating that temporal synchrony is critical for the integration of letters and speech sounds. The temporal profiles for multisensory integration in the auditory association cortex resemble those demonstrated for single multisensory neurons in different brain structures and animal species. This similarity suggests that basic neural integration rules apply to the binding of multisensory information that is not naturally related but overlearned during literacy acquisition. Furthermore, the present study shows the suitability of fMRI to study temporal aspects of multisensory neural processing.

  11. Top-Down Modulation of Auditory-Motor Integration during Speech Production: The Role of Working Memory.

    Science.gov (United States)

    Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun

    2017-10-25

    Although working memory (WM) is considered as an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study

  12. Hemispheric asymmetries for visual and auditory temporal processing: an evoked potential study.

    Science.gov (United States)

    Nicholls, Michael E R; Gora, John; Stough, Con K K

    2002-04-01

    Lateralization for temporal processing was investigated using evoked potentials to an auditory and visual gap detection task in 12 dextral adults. The auditory stimuli consisted of 300-ms bursts of white noise, half of which contained an interruption lasting 4 or 6 ms. The visual stimuli consisted of 130-ms flashes of light, half of which contained a gap lasting 6 or 8 ms. The stimuli were presented bilaterally to both ears or both visual fields. Participants made a forced two-choice discrimination using a bimanual response. Manipulations of the task had no effect on the early evoked components. However, an effect was observed for a late positive component, which occurred approximately 300-400 ms following gap presentation. This component tended to be later and lower in amplitude for the more difficult stimulus conditions. An index of the capacity to discriminate gap from no-gap stimuli was gained by calculating the difference waveform between these conditions. The peak of the difference waveform was delayed for the short-gap stimuli relative to the long-gap stimuli, reflecting decreased levels of difficulty associated with the latter stimuli. Topographic maps of the difference waveforms revealed a prominence over the left hemisphere. The visual stimuli had an occipital parietal focus whereas the auditory stimuli were parietally centered. These results confirm the importance of the left hemisphere for temporal processing and demonstrate that it is not the result of a hemispatial attentional bias or a peripheral sensory asymmetry.

  13. Effects of tonotopicity, adaptation, modulation tuning, and temporal coherence in “primitive” auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2014-01-01

    The perceptual organization of two-tone sequences into auditory streams was investigated using a modeling framework consisting of an auditory pre-processing front end [Dau et al., J. Acoust. Soc. Am. 102, 2892–2905 (1997)] combined with a temporal coherence-analysis back end [Elhilali et al., Neuron 61, 317–329 (2009)]. Two experimental paradigms were considered: (i) stream segregation as a function of tone repetition time (TRT) and frequency separation (Δf) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present study supported the hypothesis that forward masking enhances the ability to perceptually segregate spectrally close tone sequences. Furthermore, the modeling suggested that effects of neural adaptation and processing through modulation-frequency-selective filters may enhance the sensitivity to onset…

  14. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural… The purpose of the present thesis is to develop a computational auditory processing model that accounts for a large variety of experimental data on comodulation masking release (CMR), in order to obtain a more thorough understanding of the basic processing principles underlying the processing of across-frequency modulations. The second… grouping can influence the results in conditions where the processing in the auditory system is dominated by across-channel comparisons. Overall, this thesis provides insights into the specific mechanisms involved in the perception of comodulated sounds. The results are important as a basis for future…

  15. Multisensory temporal integration: task and stimulus dependencies.

    Science.gov (United States)

    Stevenson, Ryan A; Wallace, Mark T

    2013-06-01

    The ability of human sensory systems to integrate information across the different modalities provides a wide range of behavioral and perceptual benefits. This integration process is dependent upon the temporal relationship of the different sensory signals, with stimuli occurring close together in time typically resulting in the largest behavior changes. The range of temporal intervals over which such benefits are seen is typically referred to as the temporal binding window (TBW). Given the importance of temporal factors in multisensory integration under both normal and atypical circumstances such as autism and dyslexia, the TBW has been measured with a variety of experimental protocols that differ according to criterion, task, and stimulus type, making comparisons across experiments difficult. In the current study, we attempt to elucidate the role that these various factors play in the measurement of this important construct. The results show a strong effect of stimulus type, with the TBW assessed with speech stimuli being both larger and more symmetrical than that seen using simple and complex non-speech stimuli. These effects are robust across task and statistical criteria and are highly consistent within individuals, suggesting substantial overlap in the neural and cognitive operations that govern multisensory temporal processes.
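
    One common way to operationalize the TBW is to fit a Gaussian (possibly with different left and right widths, to capture the asymmetry noted above) to the rate of "simultaneous" reports across SOAs and read off its widths. The sketch below does this on invented response rates, not the study's data.

        # Asymmetric-Gaussian fit to simultaneity judgments (toy data).
        import numpy as np
        from scipy.optimize import curve_fit

        soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms (A-V)
        p_sync = np.array([0.08, 0.20, 0.45, 0.80, 0.95,
                           0.90, 0.70, 0.40, 0.15])

        def asym_gauss(x, mu, sig_l, sig_r, amp):
            sig = np.where(x < mu, sig_l, sig_r)   # separate left/right widths
            return amp * np.exp(-(x - mu) ** 2 / (2 * sig ** 2))

        (mu, sig_l, sig_r, amp), _ = curve_fit(asym_gauss, soa, p_sync,
                                               p0=(0, 150, 150, 1))
        print(f"center {mu:.0f} ms, widths {sig_l:.0f} / {sig_r:.0f} ms")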

  16. Auditory-motor integration during fast repetition: the neuronal correlates of shadowing.

    Science.gov (United States)

    Peschke, C; Ziegler, W; Kappes, J; Baumgaertner, A

    2009-08-01

    This fMRI study examined which structures of a proposed dorsal stream system are involved in the auditory-motor integration during fast overt repetition. We used a shadowing task which requires immediate repetition of an auditory-verbal input and is supposed to elicit unconscious imitation effects of phonologically irrelevant speech parameters. Subjects' responses were recorded in the scanner. To examine automated auditory-motor mapping of speech gestures of others onto one's own speech production system we contrasted the shadowing of pseudowords produced by multiple speakers (men, women, and children) with the shadowing of pseudowords produced by a single speaker. Furthermore, we asked whether behavioral variables predicted changes in functional activation during shadowing. Shadowing multiple speakers compared to a single speaker elicited increased bilateral activation predominantly in the superior temporal sulci. These regions may mediate acoustic-phonetic speaker normalization in preparation of a translation of perceptual into motor information. Additional activation in Broca's area and the thalamus may reflect motor effects of the adaptation to multiple speaker models. Item-wise correlational analyses of response latencies with BOLD signal changes indicated that longer latencies were associated with increased activation in the left parietal operculum, suggesting that this area plays a central role in the actual transfer of auditory-verbal information to speech motor representations. A multiple regression of behavioral with imaging data showed activation in a right inferior parietal area near the temporo-parietal boundary which correlated positively with the degree of speech rate imitation and negatively with response latency. This activation may be attributable to attentional and/or paralinguistic processes.

  17. Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing

    Directory of Open Access Journals (Sweden)

    Anton Ludwig Beer

    2013-02-01

    Functional magnetic resonance imaging (MRI) showed that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital, the inferior-occipital cortex, and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex.

  18. Multisensory temporal integration in autism spectrum disorders.

    Science.gov (United States)

    Stevenson, Ryan A; Siemann, Justin K; Schneider, Brittany C; Eberly, Haley E; Woynaroski, Tiffany G; Camarata, Stephen M; Wallace, Mark T

    2014-01-15

    The new DSM-5 diagnostic criteria for autism spectrum disorders (ASDs) include sensory disturbances in addition to the well-established language, communication, and social deficits. One sensory disturbance seen in ASD is an impaired ability to integrate multisensory information into a unified percept. This may arise from an underlying impairment in which individuals with ASD have difficulty perceiving the temporal relationship between cross-modal inputs, an important cue for multisensory integration. Such impairments in multisensory processing may cascade into higher-level deficits, impairing day-to-day functioning on tasks such as speech perception. To investigate multisensory temporal processing deficits in ASD and their links to speech processing, the current study mapped performance on a number of multisensory temporal tasks (with both simple and complex stimuli) onto the ability of individuals with ASD to perceptually bind audiovisual speech signals. High-functioning children with ASD were compared with a group of typically developing children. Performance on the multisensory temporal tasks varied with stimulus complexity for both groups; less precise temporal processing was observed with increasing stimulus complexity. Notably, individuals with ASD showed a speech-specific deficit in multisensory temporal processing. Most importantly, the strength of perceptual binding of audiovisual speech observed in individuals with ASD was strongly related to their low-level multisensory temporal processing abilities. Collectively, these results are the first to illustrate links between multisensory temporal function and speech processing in ASD, strongly suggesting that deficits in low-level sensory processing may cascade into higher-order domains, such as language and communication.

  19. Auditory-somatosensory temporal sensitivity improves when the somatosensory event is caused by voluntary body movement

    Directory of Open Access Journals (Sweden)

    Norimichi Kitagawa

    2016-12-01

    When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that temporal order judgment (TOJ) performance in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M = 42.5 ms ± 3.8 s.e.m.) than when their finger was passively stimulated (passive condition: M = 66.8 ms ± 6.3 s.e.m.). We further examined whether the performance improvement with voluntary action can be attributed to the prediction of the timing of the stimulation from sensory cues (sensory-based prediction), to kinesthetic cues contained in voluntary action, and/or to the prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant's finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst at regular intervals (predictable condition), TOJ performance was not improved relative to the passive condition. These results suggest that the improvement in sensitivity to the temporal disparity between somatosensory and auditory events caused by voluntary action cannot be attributed to sensory-based prediction or kinesthetic cues. Rather, prediction from the efference copy of the motor command appears crucial for improving temporal sensitivity.
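
    The just noticeable difference (JND) used here as the TOJ index is conventionally obtained by fitting a cumulative Gaussian to the proportion of 'sound-first' responses as a function of SOA and taking half the 25%-75% interquartile range. A minimal sketch with hypothetical data (not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOA in ms (positive = sound leading),
# proportion of 'sound-first' responses at each SOA
soa = np.array([-150, -100, -50, -20, 0, 20, 50, 100, 150], float)
p_sound_first = np.array([0.03, 0.10, 0.25, 0.40, 0.52, 0.63, 0.78, 0.92, 0.97])

def cum_gauss(x, pss, sigma):
    # pss: point of subjective simultaneity; sigma: spread of the fit
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_sound_first, p0=[0.0, 50.0])

# JND = half the 25%-75% interquartile range = 0.6745 * sigma for this fit
jnd = norm.ppf(0.75) * sigma
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

    Under this convention, the reported JNDs of 42.5 ms (voluntary) and 66.8 ms (passive) correspond to fitted sigmas of roughly 63 ms and 99 ms, i.e., a steeper psychometric function in the voluntary condition.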

  20. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds.

    Science.gov (United States)

    Engineer, Crystal T; Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P

    Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders.

  1. Modulation of auditory evoked responses to spectral and temporal changes by behavioral discrimination training

    Directory of Open Access Journals (Sweden)

    Okamoto Hidehiko

    2009-12-01

    Background: Due to auditory experience, musicians have better auditory expertise than non-musicians. An increased neocortical activity during auditory oddball stimulation was observed in different studies for musicians and for non-musicians after discrimination training. This suggests a modification of synaptic strength among simultaneously active neurons due to the training. We used amplitude-modulated (AM) tones presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians in order to see if behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses such as the N1 and mismatch negativity (MMN), triggered by sound changes, were recorded with a whole-head magnetoencephalography (MEG) system. We investigated (i) how the auditory cortex reacts to pitch differences (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex. Results: The results showed that, in addition to an improvement of behavioral discrimination performance, discrimination training of carrier frequency changes significantly modulates the MMN and N1 response amplitudes after the training. This process was accompanied by an attention switch to the deviant stimulus after the training procedure, identified by the occurrence of a P3a component. In contrast, the training in discrimination of modulation frequency was not sufficient to improve behavioral discrimination performance or to alter the cortical response (MMN) to the modulation frequency change. The N1 amplitude, however, showed a significant increase after and one week after the training. Similar to the training in carrier frequency discrimination, a long-lasting […]

  2. Processing of speech temporal and spectral information by users of auditory brainstem implants and cochlear implants.

    Science.gov (United States)

    Azadpour, Mahan; McKay, Colette M

    2014-01-01

    Auditory brainstem implants (ABIs) use the same processing strategy as was developed for cochlear implants (CIs). However, the cochlear nucleus (CN), the stimulation site of ABIs, is anatomically and physiologically more complex than the auditory nerve and consists of neurons with differing roles in auditory processing. The aim of this study was to evaluate the hypotheses that ABI users are less able than CI users to access speech spectro-temporal information delivered by the existing strategies, and that the sites stimulated by different locations of CI and ABI electrode arrays differ in encoding of temporal patterns in the stimulation. Six CI users and four ABI users of Nucleus implants with the ACE processing strategy participated in this study. Closed-set perception of aCa syllables (16 consonants) and bVd words (11 vowels) was evaluated via experimental processing strategies that activated one, two, or four of the electrodes of the array in a CIS manner, as well as via subjects' clinical strategies. Three single-channel strategies presented the overall temporal envelope variations of the signal on a single implant electrode located at the high-, medium-, or low-frequency region of the array. Implantees' ability to discriminate within-electrode temporal patterns of stimulation for phoneme perception, and their ability to make use of spectral information presented by an increased number of active electrodes, were assessed in the single- and multiple-channel strategies, respectively. Overall percentages and information transmission of phonetic features were obtained for each experimental program. Phoneme perception performance of three ABI users was within the range of CI users in most of the experimental strategies and improved as the number of active electrodes increased. One ABI user performed close to chance with all the single and multiple electrode strategies. There was no significant difference between apical, basal, and middle CI electrodes in transmitting speech […]
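
    Information transmission of phonetic features is typically computed in the Miller and Nicely (1955) tradition: the mutual information between stimulus and response in a confusion matrix, normalized by the stimulus entropy. The sketch below uses a hypothetical toy matrix; feature-level transmission (voicing, place, and so on) follows from the same computation after collapsing rows and columns into feature categories.

```python
import numpy as np

def relative_information_transfer(confusions):
    """Transmitted information for a stimulus-response confusion matrix
    (rows = stimuli, cols = responses), normalized by stimulus entropy
    so that 1.0 = perfect transmission and 0.0 = chance responding."""
    p = confusions / confusions.sum()
    px = p.sum(axis=1, keepdims=True)      # stimulus probabilities
    py = p.sum(axis=0, keepdims=True)      # response probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    mutual_info = terms.sum()
    stim_entropy = -(px * np.log2(px)).sum()
    return mutual_info / stim_entropy

# Toy 3-consonant confusion matrix (hypothetical response counts)
conf = np.array([[18, 1, 1],
                 [2, 16, 2],
                 [1, 3, 16]], float)
print(f"relative IT = {relative_information_transfer(conf):.2f}")
```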

  3. Posttraumatic Temporal Bone Meningocele Presenting as a Cystic Mass in the External Auditory Canal.

    Science.gov (United States)

    Alijani, Babak; Bagheri, Hamid Reza; Chabok, Shahrokh Yousefzadeh; Behzadnia, Hamid; Dehghani, Siavash

    2016-07-01

    Temporal bone meningoencephalic herniation may occur after head trauma. It is a rare condition with potentially dangerous complications. Several different routes for temporal bone meningoencephalocele have been proposed. An 11-year-old boy with a history of head trauma initially presented with a 9-month history of progressive right-sided hearing loss and facial weakness. The other complaint was the formation of a cystic mass in the right external auditory canal. The patient underwent surgery via a mini middle cranial fossa craniotomy combined with a transmastoid approach. Although presenting symptoms can be subtle, early suspicion and confirmatory imaging aid in establishing the diagnosis. The combination of computed tomography and magnetic resonance imaging will help in proper preoperative diagnosis. The operation includes transmastoid repair, middle cranial fossa repair, or a combination of both. Multilayer closure of the bony defect is very important to avoid cerebrospinal fluid leak. Clinical manifestations, diagnosis, and surgical approaches for posttraumatic meningoencephaloceles arising in the head and neck region are briefly discussed.

  4. Echoic Memory: Investigation of Its Temporal Resolution by Auditory Offset Cortical Responses

    Science.gov (United States)

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r² for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms. PMID:25170608

  5. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r² for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  6. Temporal Proximity Promotes Integration of Overlapping Events.

    Science.gov (United States)

    Zeithamova, Dagmar; Preston, Alison R

    2017-08-01

    Events with overlapping elements can be encoded as two separate representations or linked into an integrated representation, yet we know little about the conditions that promote one form of representation over the other. Here, we tested the hypothesis that the proximity of overlapping events would increase the probability of integration. Participants first established memories for house-object and face-object pairs; half of the pairs were learned 24 hr before an fMRI session, and the other half 30 min before the session. During scanning, participants encoded object-object pairs that overlapped with the initial pairs acquired on the same or prior day. Participants were also scanned as they made inference judgments about the relationships among overlapping pairs learned on the same or different day. Participants were more accurate and faster when inferring relationships among memories learned on the same day relative to those acquired across days, suggesting that temporal proximity promotes integration. Evidence for reactivation of existing memories, as measured by a visual content classifier, was equivalent during encoding of overlapping pairs from the two temporal conditions. In contrast, evidence for integration, as measured by a mnemonic strategy classifier from an independent study [Richter, F. R., Chanales, A. J. H., & Kuhl, B. A. Predicting the integration of overlapping memories by decoding mnemonic processing states during learning. Neuroimage, 124, 323-335, 2016], was greater for same-day overlapping events, paralleling the behavioral results. During inference itself, activation patterns further differentiated when participants were making inferences about events acquired on the same day versus across days. These findings indicate that temporal proximity of events promotes integration and further influences the neural mechanisms engaged during inference.

  7. Multisensory Temporal Integration in Autism Spectrum Disorders

    OpenAIRE

    Stevenson, Ryan A.; Siemann, Justin K.; Schneider, Brittany C.; Eberly, Haley E.; Woynaroski, Tiffany G.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    The new DSM-5 diagnostic criteria for autism spectrum disorders (ASDs) include sensory disturbances in addition to the well-established language, communication, and social deficits. One sensory disturbance seen in ASD is an impaired ability to integrate multisensory information into a unified percept. This may arise from an underlying impairment in which individuals with ASD have difficulty perceiving the temporal relationship between cross-modal inputs, an important cue for multisensory inte...

  8. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    Science.gov (United States)

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  9. Quantifying auditory temporal stability in a large database of recorded music.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    Full Text Available "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy" in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training, exercise (e.g., jogging, or entertainment (e.g., continuous dance mixes. Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature", none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files. A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  10. Using a staircase procedure for the objective measurement of auditory stream integration and segregation thresholds

    Directory of Open Access Journals (Sweden)

    Mona Isabel Spielmann

    2013-08-01

    Auditory scene analysis describes the ability to segregate relevant sounds out from the environment and to integrate them into a single sound stream, using the characteristics of the sounds to determine whether or not they are related. This study aims to contrast task performances in objective threshold measurements of segregation and integration using identical stimuli, manipulating two variables known to influence streaming: inter-stimulus interval (ISI) and frequency difference (Δf). For each measurement, one parameter (either ISI or Δf) was held constant while the other was altered in a staircase procedure. By using this paradigm, it is possible to test within-subject across multiple conditions, covering a wide Δf and ISI range in one testing session. The objective tasks were based on across-stream temporal judgments (facilitated by integration) and within-stream deviance detection (facilitated by segregation). Results show the objective integration task is well suited for combination with the staircase procedure, as it yields consistent threshold measurements for separate variations of ISI and Δf, as well as being significantly related to the subjective thresholds. The objective segregation task appears less suited to the staircase procedure. With the integration-based staircase paradigm, a comprehensive assessment of streaming thresholds can be obtained in a relatively short space of time. This permits efficient threshold measurements, particularly in groups for which there is little prior knowledge on the relevant parameter space for streaming perception.
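
    As a concrete illustration of the adaptive procedure: a transformed up-down staircase such as 2-down/1-up converges on the ~70.7%-correct point of the psychometric function by making the task harder after two consecutive correct responses and easier after each error. The sketch below varies a generic 'level' (standing in for Δf or ISI while the other is held constant) against a simulated observer; the observer model, step size, and all numbers are placeholders for real trial-by-trial responses.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_staircase(threshold=5.0, start=12.0, step=1.0, n_reversals=10):
    """Minimal 2-down/1-up staircase; returns the mean of the reversal
    levels (discarding the first two) as the threshold estimate."""
    level, direction, correct_in_row = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        # Simulated observer: more likely correct when level > threshold
        p_correct = 1.0 / (1.0 + np.exp(-(level - threshold)))
        if rng.random() < p_correct:
            correct_in_row += 1
            if correct_in_row == 2:        # two correct -> harder
                correct_in_row = 0
                if direction == +1:        # direction flip = reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.0)
        else:                              # one wrong -> easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return np.mean(reversals[2:])

print(f"estimated threshold: {run_staircase():.1f}")
```

    In the study's design, each run would fix one parameter (say Δf) and staircase the other (ISI), or vice versa, which is how a wide region of the two-dimensional parameter space can be covered within a single session.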

  11. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

    That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e., rare 115-ms tones presented in a stream of 100-ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.

  12. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT).

    Science.gov (United States)

    Mohammad Esmaeilzadeh, Sahar; Sharifi, Shahla; Tayarani Niknezhad, Hamid

    2013-09-01

    Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). There have been several studies which focus on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach of AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children. In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, for example. We also used reliable websites such as the American Choral Directors Association and Joint Committee on Infant Hearing websites. The websites were reviewed and key words in this article used to find appropriate references. Those articles which are related to ours in content were selected. AVT, MT, and PT enhance children's communication and language skills from an early age. Each method has a meaningful impact on hearing loss, so by integrating them we obtain a comprehensive method to facilitate communication and language learning. To achieve this goal, the article offers methods and techniques to perform AVT and MT integrated with PT, leading to an approach which offers all the advantages of these three types of therapy.

  13. An Auditory Integrational Problem with Associated Language Disability in an Adult Mental Patient

    Science.gov (United States)

    McGrew, Winifred C.

    1973-01-01

    The case history focuses on diagnostic and treatment procedures used to treat an institutionalized adult male found to have a disorder of auditory integration associated with severe language disability. (Author/LS)

  14. Effects of sensorineural hearing loss on temporal coding of narrowband and broadband signals in the auditory periphery.

    Science.gov (United States)

    Henry, Kenneth S; Heinz, Michael G

    2013-09-01

    People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. This article is part of a Special Issue entitled "Annual Reviews 2013".

  15. Neural Correlates of Temporal Auditory Processing in Developmental Dyslexia during German Vowel Length Discrimination: An fMRI Study

    Science.gov (United States)

    Steinbrink, Claudia; Groth, Katarina; Lachmann, Thomas; Riecker, Axel

    2012-01-01

    This fMRI study investigated phonological vs. auditory temporal processing in developmental dyslexia by means of a German vowel length discrimination paradigm (Groth, Lachmann, Riecker, Muthmann, & Steinbrink, 2011). Behavioral and fMRI data were collected from dyslexics and controls while performing same-different judgments of vowel duration in…

  16. Temporal integration depends on increased prestimulus beta band power

    NARCIS (Netherlands)

    Geerligs, Linda; Akyürek, Elkan G.

    2012-01-01

    Temporal integration was examined using a missing element task, in which task performance depends on the ability to integrate brief successive stimulus displays. Previous studies have suggested that temporal integration is under endogenous control and that integration is more likely when stimuli […]

  17. Plasticity in bilateral superior temporal cortex: Effects of deafness and cochlear implantation on auditory and visual speech processing.

    Science.gov (United States)

    Anderson, Carly A; Lazard, Diane S; Hartley, Douglas E H

    2017-01-01

    While many individuals can benefit substantially from cochlear implantation, the ability to perceive and understand auditory speech with a cochlear implant (CI) remains highly variable amongst adult recipients. Importantly, auditory performance with a CI cannot be reliably predicted based solely on routinely obtained information regarding clinical characteristics of the CI candidate. This review argues that central factors, notably cortical function and plasticity, should also be considered as important contributors to the observed individual variability in CI outcome. Superior temporal cortex (STC), including auditory association areas, plays a crucial role in the processing of auditory and visual speech information. The current review considers evidence of cortical plasticity within bilateral STC, and how these effects may explain variability in CI outcome. Furthermore, evidence of audio-visual interactions in temporal and occipital cortices is examined, and relation to CI outcome is discussed. To date, longitudinal examination of changes in cortical function and plasticity over the period of rehabilitation with a CI has been restricted by methodological challenges. The application of functional near-infrared spectroscopy (fNIRS) in studying cortical function in CI users is becoming increasingly recognised as a potential solution to these problems. Here we suggest that fNIRS offers a powerful neuroimaging tool to elucidate the relationship between audio-visual interactions, cortical plasticity during deafness and following cochlear implantation, and individual variability in auditory performance with a CI.

  18. Multiscale temporal integrators for fluctuating hydrodynamics

    Science.gov (United States)

    Delong, Steven; Sun, Yifei; Griffith, Boyce E.; Vanden-Eijnden, Eric; Donev, Aleksandar

    2014-12-01

    Following on our previous work [S. Delong, B. E. Griffith, E. Vanden-Eijnden, and A. Donev, Phys. Rev. E 87, 033302 (2013), 10.1103/PhysRevE.87.033302], we develop temporal integrators for solving Langevin stochastic differential equations that arise in fluctuating hydrodynamics. Our simple predictor-corrector schemes add fluctuations to standard second-order deterministic solvers in a way that maintains second-order weak accuracy for linearized fluctuating hydrodynamics. We construct a general class of schemes and recommend two specific schemes: an explicit midpoint method and an implicit trapezoidal method. We also construct predictor-corrector methods for integrating the overdamped limit of systems of equations with a fast and slow variable in the limit of infinite separation of the fast and slow time scales. We propose using random finite differences to approximate some of the stochastic drift terms that arise because of the kinetic multiplicative noise in the limiting dynamics. We illustrate our integrators on two applications involving the development of giant nonequilibrium concentration fluctuations in diffusively mixing fluids. We first study the development of giant fluctuations in recent experiments performed in microgravity using an overdamped integrator. We then include the effects of gravity and find that we also need to include the effects of fluid inertia, which affects the dynamics of the concentration fluctuations greatly at small wave numbers.
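
    The flavor of such predictor-corrector schemes can be seen on the simplest Langevin equation, the Ornstein-Uhlenbeck process dx = -γx dt + σ dW. The sketch below is a generic explicit-midpoint construction in the spirit of the paper, not a transcription of its schemes: the half-step predictor is driven by one Gaussian increment, and the full-step corrector reuses it plus an independent second increment, so the total noise variance per step is σ²dt.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_explicit_midpoint(x0, gamma, sigma, dt, n_steps):
    """Explicit-midpoint predictor-corrector for dx = -gamma*x dt + sigma dW.
    w1 drives the half-step predictor; the full-step corrector reuses w1
    and adds an independent w2 (each with variance dt/2)."""
    x = x0
    for _ in range(n_steps):
        w1 = rng.normal() * np.sqrt(dt / 2.0)
        w2 = rng.normal() * np.sqrt(dt / 2.0)
        x_mid = x - gamma * x * (dt / 2.0) + sigma * w1   # predictor: half step
        x = x - gamma * x_mid * dt + sigma * (w1 + w2)    # corrector: full step
    return x

# Sanity check: the OU equilibrium variance is sigma^2 / (2*gamma) = 0.5 here
samples = np.array([ou_explicit_midpoint(0.0, 1.0, 1.0, 0.05, 1000)
                    for _ in range(1000)])
print(f"sample variance = {samples.var():.3f} (exact: 0.500)")
```

    For this linear test case the construction reproduces the equilibrium variance σ²/(2γ) essentially exactly at finite dt, which is the practical payoff of adding the fluctuations through the corrector rather than bolting them onto a deterministic step.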

  19. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT

    Directory of Open Access Journals (Sweden)

    Sahar Mohammad Esmaeilzadeh

    2013-10-01

    Introduction: Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). There have been several studies which focus on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach of AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children. Materials and Methods: In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, for example. We also used reliable websites such as the American Choral Directors Association and Joint Committee on Infant Hearing websites. The websites were reviewed and key words in this article used to find appropriate references. Those articles which are related to ours in content were selected. Results: Recent technologies have brought about great advancement in the field of hearing disorders. Now these impairments can be detected at birth, and in the majority of cases, hearing-impaired children can develop fluent spoken language through audition. According to research on the relationship between hearing-impaired children's communication and language skills and different approaches of therapy, it is known that learning through listening and […]

  20. Formation of temporal-feature maps in the barn owl's auditory system

    Science.gov (United States)

    Kempter, Richard

    2000-03-01

    Computational maps are of central importance to the brain's representation of the outside world. The question of how maps are formed during ontogenetic development is a subject of intense research (Hubel & Wiesel, Proc R Soc B 198:1, 1977; Buonomano & Merzenich, Annu Rev Neurosci 21:149, 1998). Development in the primary visual cortex is in principle well explained compared to that in the auditory system, partly because the mechanisms underlying the formation of temporal-feature maps are hardly understood (Carr, Annu Rev Neurosci 16:223, 1993). Through a modelling study based on computer simulations of a system of spiking neurons, a solution is offered to the problem of how a map of interaural time differences is set up in the nucleus laminaris of the barn owl, as a typical example. An array of neurons is able to represent interaural time differences in an orderly manner, viz., as a map, if homosynaptic spike-based Hebbian learning (Gerstner et al, Nature 383:76, 1996; Kempter et al, Phys Rev E 59:4498, 1999) is combined with a presynaptic propagation of synaptic modifications (Fitzsimonds & Poo, Physiol Rev 78:143, 1998). The latter may be orders of magnitude weaker than the former. The algorithm is a key mechanism for the formation of temporal-feature maps on a submillisecond time scale.
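
    The homosynaptic spike-based Hebbian rule at the heart of such models can be sketched as an exponential learning window: inputs whose spikes tend to precede the postsynaptic spike are potentiated, later ones depressed. The window shape, time constants, and spike times below are illustrative placeholders, not the parameters of the cited work, and the presynaptic propagation of weight changes is omitted.

```python
import numpy as np

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=0.5, tau_minus=0.5):
    """Exponential spike-timing-dependent learning window.
    dt_ms = t_post - t_pre: positive (pre leads post) -> potentiation,
    negative -> depression. Sub-millisecond time constants are assumed
    here to mirror the sub-millisecond selectivity required for ITD maps."""
    dt_ms = np.asarray(dt_ms, float)
    return np.where(dt_ms >= 0,
                    a_plus * np.exp(-dt_ms / tau_plus),
                    -a_minus * np.exp(dt_ms / tau_minus))

def update_weights(w, pre_spikes, post_spikes):
    """One Hebbian epoch: nudge each synaptic weight by the summed window
    value over all pre/post spike pairings, clipped to stay bounded."""
    for i, pre in enumerate(pre_spikes):       # pre_spikes: spike list per synapse
        dw = sum(stdp(post - t) for t in pre for post in post_spikes)
        w[i] = np.clip(w[i] + dw, 0.0, 1.0)
    return w

w = np.full(3, 0.5)
pre = [[0.0, 10.0], [0.3, 10.3], [2.0, 12.0]]   # presynaptic spike times (ms)
post = [0.4, 10.4]                              # postsynaptic spike times (ms)
print(update_weights(w, pre, post))
# Inputs whose spikes reliably precede the postsynaptic spike gain weight;
# iterated over many cycles, consistently timed inputs win the competition.
```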

  1. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

    There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception, and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that this phenomenon should be apparent in naturalistic visual experiences. We tested this using movie clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty of following stimuli at the hypothesized 2-3 second scrambling condition. Moreover, only this difference could not be accounted for by low-level visual properties. This provides the first evidence that this 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.

  2. Sensitivity of cochlear nucleus neurons to spatio-temporal changes in auditory nerve activity.

    Science.gov (United States)

    Wang, Grace I; Delgutte, Bertrand

    2012-12-01

    The spatio-temporal pattern of auditory nerve (AN) activity, representing the relative timing of spikes across the tonotopic axis, contains cues to perceptual features of sounds such as pitch, loudness, timbre, and spatial location. These spatio-temporal cues may be extracted by neurons in the cochlear nucleus (CN) that are sensitive to relative timing of inputs from AN fibers innervating different cochlear regions. One possible mechanism for this extraction is "cross-frequency" coincidence detection (CD), in which a central neuron converts the degree of coincidence across the tonotopic axis into a rate code by preferentially firing when its AN inputs discharge in synchrony. We used Huffman stimuli (Carney LH. J Neurophysiol 64: 437-456, 1990), which have a flat power spectrum but differ in their phase spectra, to systematically manipulate relative timing of spikes across tonotopically neighboring AN fibers without changing overall firing rates. We compared responses of CN units to Huffman stimuli with responses of model CD cells operating on spatio-temporal patterns of AN activity derived from measured responses of AN fibers with the principle of cochlear scaling invariance. We used the maximum likelihood method to determine the CD model cell parameters most likely to produce the measured CN unit responses, and thereby could distinguish units behaving like cross-frequency CD cells from those consistent with same-frequency CD (in which all inputs would originate from the same tonotopic location). We find that certain CN unit types, especially those associated with globular bushy cells, have responses consistent with cross-frequency CD cells. A possible functional role of a cross-frequency CD mechanism in these CN units is to increase the dynamic range of binaural neurons that process cues for sound localization.
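
    A cross-frequency CD cell of the kind fitted here can be caricatured as a counter of near-simultaneous spikes across its input fibers: equal input firing rates can yield very different output rates depending only on across-fiber spike timing. The coincidence window, threshold, and Poisson-like inputs below are invented for illustration, not the fitted model parameters.

```python
import numpy as np

def cd_output_rate(input_spike_trains, window_ms=0.5, threshold=3,
                   duration_ms=1000.0):
    """Count coincidences: moments when at least `threshold` input fibers
    (e.g., AN fibers at neighboring CFs) spike within `window_ms` of one
    another. The output rate thus reflects across-fiber timing, not the
    inputs' average rates."""
    events = np.sort(np.concatenate([np.asarray(t) for t in input_spike_trains]))
    count, i = 0, 0
    for j in range(len(events)):
        while events[j] - events[i] > window_ms:
            i += 1
        if j - i + 1 >= threshold:
            count += 1
            i = j + 1                 # crude refractoriness: spikes used once
    return 1000.0 * count / duration_ms   # output spikes per second

rng = np.random.default_rng(0)
# Three fibers with identical mean rates: 'sync' trains share jittered spike
# times across fibers, 'asyn' trains are independent.
base = np.sort(rng.uniform(0, 1000, 100))
sync = [base + rng.normal(0, 0.1, base.size) for _ in range(3)]
asyn = [np.sort(rng.uniform(0, 1000, 100)) for _ in range(3)]
print(cd_output_rate(sync), cd_output_rate(asyn))   # high rate vs. near zero
```

    The study's model fits distinguish cross-frequency from same-frequency CD by whether the best-fitting inputs span different CFs; the toy above only shows why such a cell converts relative timing into a rate code at all.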

  3. Auditory integration training and other sound therapies for autism spectrum disorders (ASD).

    Science.gov (United States)

    Sinha, Yashwant; Silove, Natalie; Hayen, Andrew; Williams, Katrina

    2011-12-07

    Auditory integration therapy was developed as a technique for improving abnormal sound sensitivity in individuals with behavioural disorders, including autism spectrum disorders. Other sound therapies bearing similarities to auditory integration therapy include the Tomatis Method and Samonas Sound Therapy. To determine the effectiveness of auditory integration therapy or other methods of sound therapy in individuals with autism spectrum disorders, we searched the following databases in September 2010 for this update: CENTRAL (2010, Issue 2), MEDLINE (1950 to September week 2, 2010), EMBASE (1980 to Week 38, 2010), CINAHL (1937 to current), PsycINFO (1887 to current), ERIC (1966 to current), LILACS (September 2010) and the reference lists of published papers. One new study was found for inclusion. We included randomised controlled trials involving adults or children with autism spectrum disorders. Treatment was auditory integration therapy or other sound therapies involving listening to music modified by filtering and modulation. Control groups could involve no treatment, a waiting list, usual therapy or a placebo equivalent. The outcomes were changes in core and associated features of autism spectrum disorders, auditory processing, quality of life and adverse events. Two independent review authors performed data extraction. All outcome data in the included papers were continuous. We calculated point estimates and standard errors from t-test scores and post-intervention means. Meta-analysis was inappropriate for the available data. We identified six randomised controlled trials of auditory integration therapy and one of Tomatis therapy, involving a total of 182 individuals aged three to 39 years. Two were cross-over trials. Five trials had fewer than 20 participants. Allocation concealment was inadequate for all studies. Twenty different outcome measures were used and only two outcomes were used by three or more studies. Meta-analysis was not possible due to very high […]

  4. Links Between Temporal Acuity and Multisensory Integration Across Life Span.

    Science.gov (United States)

    Stevenson, Ryan A; Baum, Sarah H; Krueger, Juliane; Newhouse, Paul A; Wallace, Mark T

    2017-04-27

    The temporal relationship between individual pieces of information from the different sensory modalities is one of the stronger cues to integrate such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Of importance, throughout development, temporal acuity of simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. However, in the aging population, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, temporal acuity was not able to predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it is unable to account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.

  5. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling eLee

    2014-08-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.

  6. Forward Masking: Temporal Integration or Adaptation?

    DEFF Research Database (Denmark)

    Ewert, Stephan D.; Hau, Ole; Dau, Torsten

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-date […]

  7. Noise-induced hearing loss increases the temporal precision of complex envelope coding by auditory-nerve fibers

    Directory of Open Access Journals (Sweden)

    Kenneth Stuart Henry

    2014-02-01

    While changes in cochlear frequency tuning are thought to play an important role in the perceptual difficulties of people with sensorineural hearing loss (SNHL), the possible role of temporal processing deficits remains less clear. Our knowledge of temporal envelope coding in the impaired cochlea is limited to two studies that examined auditory-nerve fiber responses to narrowband amplitude-modulated stimuli. In the present study, we used Wiener-kernel analyses of auditory-nerve fiber responses to broadband Gaussian noise in anesthetized chinchillas to quantify changes in temporal envelope coding with noise-induced SNHL. Temporal modulation transfer functions (TMTFs) and temporal windows of sensitivity to acoustic stimulation were computed from 2nd-order Wiener kernels and analyzed to estimate the temporal precision, amplitude, and latency of envelope coding. Noise overexposure was associated with slower (less negative) TMTF roll-off with increasing modulation frequency and reduced temporal window duration. The results show that at equal stimulus sensation level, SNHL increases the temporal precision of envelope coding by 20-30%. Furthermore, SNHL increased the amplitude of envelope coding by 50% in fibers with CFs from 1-2 kHz and decreased mean response latency by 0.4 ms. While a previous study of envelope coding demonstrated a similar increase in response amplitude, the present study is the first to show enhanced temporal precision. This new finding may relate to the use of a more complex stimulus with broad frequency bandwidth and a dynamic temporal envelope. Exaggerated neural coding of fast envelope modulations may contribute to perceptual difficulties in people with SNHL by acting as a distraction from more relevant acoustic cues, especially in fluctuating background noise. Finally, the results underscore the value of studying sensory systems with more natural, real-world stimuli.

  8. Temporal modulation transfer functions measured from auditory-nerve responses following sensorineural hearing loss.

    Science.gov (United States)

    Kale, Sushrut; Heinz, Michael G

    2012-04-01

    The ability of auditory-nerve (AN) fibers to encode modulation frequencies, as characterized by temporal modulation transfer functions (TMTFs), generally shows a low-pass shape with a cut-off frequency that increases with fiber characteristic frequency (CF). Because AN-fiber bandwidth increases with CF, this result has been interpreted to suggest that peripheral filtering has a significant effect on limiting the encoding of higher modulation frequencies. Sensorineural hearing loss (SNHL), which is typically associated with broadened tuning, is thus predicted to increase the range of modulation frequencies encoded; however, perceptual studies have generally not supported this prediction. The present study sought to determine whether the range of modulation frequencies encoded by AN fibers is affected by SNHL, and whether the effects of SNHL on envelope coding are similar at all modulation frequencies within the TMTF passband. Modulation response gain for sinusoidally amplitude modulated (SAM) tones was measured as a function of modulation frequency, with the carrier frequency placed at fiber CF. TMTFs were compared between normal-hearing chinchillas and chinchillas with a noise-induced hearing loss for which AN fibers had significantly broadened tuning. Synchrony and phase responses for individual SAM tone components were quantified to explore a variety of factors that can influence modulation coding. Modulation gain was found to be higher than normal in noise-exposed fibers across the entire range of modulation frequencies encoded by AN fibers. The range of modulation frequencies encoded by noise-exposed AN fibers was not affected by SNHL, as quantified by TMTF 3- and 10-dB cut-off frequencies. These results suggest that physiological factors other than peripheral filtering may have a significant role in determining the range of modulation frequencies encoded in AN fibers. Furthermore, these neural data may help to explain the lack of a consistent association […]
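
    The quantities behind a TMTF measurement have standard definitions: synchrony to the envelope is the vector strength R at the modulation frequency, and one common definition of modulation gain is 20·log10(2R/m) for modulation depth m. A sketch with simulated spikes (all numbers invented):

```python
import numpy as np

def vector_strength(spike_times, freq_hz):
    """Synchrony of spikes to one stimulus frequency: 1 = perfect phase
    locking, 0 = none."""
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times)
    return np.abs(np.exp(1j * phases).mean())

def modulation_gain_db(spike_times, mod_freq_hz, mod_depth):
    """Response synchrony to the envelope frequency relative to the stimulus
    modulation depth m; plotting this against modulation frequency traces
    out the TMTF."""
    r = vector_strength(spike_times, mod_freq_hz)
    return 20 * np.log10(2 * r / mod_depth)

# Toy response: spikes drawn preferentially near the peaks of a 100-Hz,
# 80%-depth SAM envelope
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 20000)
keep = rng.random(t.size) < 0.5 * (1 + 0.8 * np.cos(2 * np.pi * 100 * t))
spikes = np.sort(t[keep])
print(f"VS = {vector_strength(spikes, 100):.2f}, "
      f"gain = {modulation_gain_db(spikes, 100, 0.8):.1f} dB")
```

    With these definitions, 0 dB means the period histogram is modulated exactly as deeply as the stimulus envelope; the "higher than normal" gains reported above sit above the normal-hearing curve across the passband.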

  9. Temporal Modulation Transfer Functions Measured From Auditory-Nerve Responses Following Sensorineural Hearing Loss

    Science.gov (United States)

    Kale, Sushrut; Heinz, Michael G.

    2012-01-01

    The ability of auditory-nerve (AN) fibers to encode modulation frequencies, as characterized by temporal modulation transfer functions (TMTFs), generally shows a low-pass shape with a cut-off frequency that increases with fiber characteristic frequency (CF). Because AN-fiber bandwidth increases with CF, this result has been interpreted to suggest that peripheral filtering has a significant effect on limiting the encoding of higher modulation frequencies. Sensorineural hearing loss (SNHL), which is typically associated with broadened tuning, is thus predicted to increase the range of modulation frequencies encoded; however, perceptual studies have generally not supported this prediction. The present study sought to determine whether the range of modulation frequencies encoded by AN fibers is affected by SNHL, and whether the effects of SNHL on envelope coding are similar at all modulation frequencies within the TMTF passband. Modulation response gain for sinusoidally amplitude modulated (SAM) tones was measured as a function of modulation frequency, with the carrier frequency placed at fiber CF. TMTFs were compared between normal-hearing chinchillas and chinchillas with a noise-induced hearing loss for which AN fibers had significantly broadened tuning. Synchrony and phase responses for individual SAM-tone components were quantified to explore a variety of factors that can influence modulation coding. Modulation gain was found to be higher than normal in noise-exposed fibers across the entire range of modulation frequencies encoded by AN fibers. The range of modulation frequencies encoded by noise-exposed AN fibers was not affected by SNHL, as quantified by TMTF 3- and 10-dB cut-off frequencies. These results suggest that physiological factors other than peripheral filtering may have a significant role in determining the range of modulation frequencies encoded in AN fibers. Furthermore, these neural data may help to explain the lack of a consistent association […]

  10. Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns.

    Directory of Open Access Journals (Sweden)

    Andres Carrasco

Full Text Available Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency-modulated sweeps), and ecologically relevant (conspecific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.

  11. Temporal sequence of visuo-auditory interaction in multiple areas of the guinea pig visual cortex.

    Directory of Open Access Journals (Sweden)

    Masataka Nishimura

Full Text Available Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation, and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.

  12. The encoding of vowels and temporal speech cues in the auditory cortex of professional musicians: an EEG study.

    Science.gov (United States)

    Kühnis, Jürg; Elmer, Stefan; Meyer, Martin; Jäncke, Lutz

    2013-07-01

    Here, we applied a multi-feature mismatch negativity (MMN) paradigm in order to systematically investigate the neuronal representation of vowels and temporally manipulated CV syllables in a homogeneous sample of string players and non-musicians. Based on previous work indicating an increased sensitivity of the musicians' auditory system, we expected to find that musically trained subjects will elicit increased MMN amplitudes in response to temporal variations in CV syllables, namely voice-onset time (VOT) and duration. In addition, since different vowels are principally distinguished by means of frequency information and musicians are superior in extracting tonal (and thus frequency) information from an acoustic stream, we also expected to provide evidence for an increased auditory representation of vowels in the experts. In line with our hypothesis, we could show that musicians are not only advantaged in the pre-attentive encoding of temporal speech cues, but most notably also in processing vowels. Additional "just noticeable difference" measurements suggested that the musicians' perceptual advantage in encoding speech sounds was more likely driven by the generic constitutional properties of a highly trained auditory system, rather than by its specialisation for speech representations per se. These results shed light on the origin of the often reported advantage of musicians in processing a variety of speech sounds. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams

    Directory of Open Access Journals (Sweden)

    Yi-Huang eSu

    2014-12-01

Full Text Available Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beats, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating a greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  14. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  15. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group in the recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re

  16. Respiratory sinus arrhythmia and auditory processing in autism: modifiable deficits of an integrated social engagement system?

    Science.gov (United States)

    Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J

    2013-06-01

The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with age-matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention-demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Large scale functional brain networks underlying temporal integration of audio-visual speech perception: An EEG study

    OpenAIRE

    G. Vinodh Kumar; Tamesh Halder; Amit Kumar Jaiswal; Abhishek Mukherjee; Dipanjan Roy; Arpan Banerjee

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. How...

  19. Effect of temporal predictability on the neural processing of self-triggered auditory stimulation during vocalization

    Directory of Open Access Journals (Sweden)

    Chen Zhaocong

    2012-05-01

Full Text Available Background: Sensory consequences of our own actions are perceived differently from sensory stimuli that are generated externally. The present event-related potential (ERP) study examined the neural responses to self-triggered stimulation relative to externally-triggered stimulation as a function of the delay between the motor act and the stimulus onset. While sustaining a vowel phonation, subjects clicked a mouse and heard pitch-shift stimuli (PSS) in the voice auditory feedback at delays of either 0 ms (predictable) or 500-1000 ms (unpredictable). The motor effect resulting from the mouse click was corrected in the data analyses. For the externally-triggered condition, PSS were delivered by a computer with a delay of 500-1000 ms after the vocal onset. Results: As compared to unpredictable externally-triggered PSS, P2 responses to predictable self-triggered PSS were significantly suppressed, whereas an enhancement effect for P2 responses was observed when the timing of self-triggered PSS was unpredictable. Conclusions: These findings demonstrate the effect of the temporal predictability of stimulus delivery with respect to the motor act on the neural responses to self-triggered stimulation. Responses to self-triggered stimulation were suppressed or enhanced relative to externally-triggered stimulation when the timing of stimulus delivery was predictable or unpredictable, respectively. The enhancement effect for unpredictable self-triggered stimulation supports the idea that sensory suppression of self-produced action may be primarily caused by an accurate prediction of stimulus timing, rather than by a movement-related non-specific suppression.

  20. Distinct Temporal Coordination of Spontaneous Population Activity between Basal Forebrain and Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Josue G. Yague

    2017-09-01

Full Text Available The basal forebrain (BF) has long been implicated in attention, learning and memory, and recent studies have established a causal relationship between artificial BF activation and arousal. However, neural ensemble dynamics in the BF still remain unclear. Here, recording neural population activity in the BF and comparing it with simultaneously recorded cortical population activity under both anesthetized and unanesthetized conditions, we investigate the difference in the structure of spontaneous population activity between the BF and the auditory cortex (AC) in mice. The AC neuronal population shows a skewed spike rate distribution, a higher proportion of short (≤80 ms) inter-spike intervals (ISIs) and a rich repertoire of rhythmic firing across frequencies. Although the distribution of spontaneous firing rates in the BF is also skewed, the proportion of short ISIs can be explained by a Poisson model at short time scales (≤20 ms) and spike count correlations are lower compared to AC cells, with optogenetically identified cholinergic cell pairs showing exceptionally higher correlations. Furthermore, a smaller fraction of BF neurons shows spike-field entrainment across frequencies: a subset of BF neurons fires rhythmically at slow (≤6 Hz) frequencies, with varied phase preferences to ongoing field potentials, in contrast to the consistent phase preference of AC populations. Firing of these slow rhythmic BF cells is correlated to a greater degree than that of other rhythmic BF cell pairs. Overall, the fundamental difference in the structure of population activity between the AC and BF is their temporal coordination, in particular their operational timescales. These results suggest that BF neurons slowly modulate downstream populations whereas cortical circuits transmit signals on multiple timescales. Thus, the characterization of the neural ensemble dynamics in the BF provides further insight into the neural mechanisms by which brain states are regulated.
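
    Two of the population statistics compared here, the fraction of short ISIs relative to a Poisson expectation and pairwise spike count correlations, are simple to compute. Below is a minimal Python sketch under simplifying assumptions (a homogeneous Poisson model and fixed count bins; the study's actual pipeline is more involved):

        import numpy as np

        rng = np.random.default_rng(0)

        def short_isi_fraction(spike_times, thresh=0.08):
            # Fraction of inter-spike intervals shorter than thresh (seconds).
            isis = np.diff(np.sort(spike_times))
            return np.mean(isis <= thresh) if isis.size else np.nan

        def poisson_short_isi_fraction(rate_hz, thresh=0.08):
            # For a homogeneous Poisson process, ISIs are exponential,
            # so P(ISI <= t) = 1 - exp(-rate * t).
            return 1.0 - np.exp(-rate_hz * thresh)

        def spike_count_correlation(t1, t2, dur, bin_s=0.1):
            # Pearson correlation of binned spike counts of two trains.
            bins = np.arange(0.0, dur + bin_s, bin_s)
            c1, _ = np.histogram(t1, bins)
            c2, _ = np.histogram(t2, bins)
            return np.corrcoef(c1, c2)[0, 1]

        # Simulate two independent 100-s Poisson trains at 5 spikes/s
        rate, dur = 5.0, 100.0
        train1 = np.cumsum(rng.exponential(1.0 / rate, size=int(rate * dur * 2)))
        train1 = train1[train1 < dur]
        train2 = np.cumsum(rng.exponential(1.0 / rate, size=int(rate * dur * 2)))
        train2 = train2[train2 < dur]
        print(short_isi_fraction(train1), poisson_short_isi_fraction(rate))  # should match
        print(spike_count_correlation(train1, train2, dur))                  # ~0, independent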

  1. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    Science.gov (United States)

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal
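
    The adaptation-plus-anticipation idea can be conveyed with a toy simulation: phase correction alone lags a changing tempo, while adding a linear extrapolation of the last two inter-onset intervals removes the lag. This is a minimal sketch inspired by the model description, not the published ADAM implementation; the parameter alpha is a hypothetical phase-correction gain:

        import numpy as np

        def simulate_tapping(stim_onsets, alpha=0.5, anticipate=True):
            # Adaptation: correct a fraction alpha of the last asynchrony.
            # Anticipation: extrapolate the tempo change from the last two IOIs.
            taps = [stim_onsets[0], stim_onsets[1]]  # first two taps assumed on time
            for n in range(1, len(stim_onsets) - 1):
                async_n = taps[n] - stim_onsets[n]            # signed asynchrony
                ioi_prev = stim_onsets[n] - stim_onsets[n - 1]
                if anticipate and n >= 2:
                    ioi_next = 2 * ioi_prev - (stim_onsets[n - 1] - stim_onsets[n - 2])
                else:
                    ioi_next = ioi_prev                       # assume constant tempo
                taps.append(taps[n] + ioi_next - alpha * async_n)
            return np.array(taps)

        # Accelerating pacing sequence: IOIs shrink from 600 ms to 400 ms
        iois = np.linspace(0.6, 0.4, 20)
        onsets = np.concatenate([[0.0], np.cumsum(iois)])
        for flag in (False, True):
            taps = simulate_tapping(onsets, alpha=0.5, anticipate=flag)
            ma = np.mean(np.abs(taps[2:] - onsets[2:]))
            print("anticipation:", flag, "mean |asynchrony| (ms):", round(1000 * ma, 1))

    With adaptation only, the simulated tapper settles into a constant lag proportional to the tempo-change rate; adding the extrapolation step drives the asynchrony toward zero, which mirrors the paper's conclusion that error correction alone cannot explain the observed synchronization precision.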

  2. Interactions between "what" and "when" in the auditory system: temporal predictability enhances repetition suppression.

    Science.gov (United States)

    Costa-Faidella, Jordi; Baldeweg, Torsten; Grimm, Sabine; Escera, Carles

    2011-12-14

Neural activity in the auditory system decreases with repeated stimulation, matching stimulus probability at multiple timescales. This phenomenon, known as stimulus-specific adaptation, is interpreted as a neural mechanism of regularity encoding aiding auditory object formation. However, despite the overwhelming literature covering recordings from single-cell to scalp auditory-evoked potential (AEP), stimulation timing has received little interest. Here we investigated whether timing predictability enhances the experience-dependent modulation of neural activity associated with stimulus probability encoding. We used human electrophysiological recordings in healthy participants who were exposed to passive listening of sound sequences. Pure tones of different frequencies were delivered in successive trains of a variable number of repetitions, enabling the study of sequential repetition effects in the AEP. In the predictable timing condition, tones were delivered with isochronous interstimulus intervals; in the unpredictable timing condition, interstimulus intervals varied randomly. Our results show that unpredictable stimulus timing abolishes the early part of the repetition positivity, an AEP indexing auditory sensory memory trace formation, while leaving the later part (≈ >200 ms) unaffected. This suggests that timing predictability aids the propagation of repetition effects upstream along the auditory pathway, most likely from association auditory cortex (including the planum temporale) toward primary auditory cortex (Heschl's gyrus) and beyond, as judged by the timing of AEP latencies. This outcome calls for attention to stimulation timing in future experiments regarding sensory memory trace formation in AEP measures and stimulus probability encoding in animal models.

  3. The Effect of Auditory Integration Training on the Working Memory of Adults with Different Learning Preferences

    Science.gov (United States)

    Ryan, Tamara E.

    2014-01-01

The purpose of this study was to determine the effects of auditory integration training (AIT) on a component of executive function, working memory; specifically, to determine whether learning preferences might interact with AIT to increase the outcome for some learners. The question asked by this quantitative pretest-posttest design is…

  4. Slow Temporal Integration Enables Robust Neural Coding and Perception of a Cue to Sound Source Location.

    Science.gov (United States)

    Brown, Andrew D; Tollin, Daniel J

    2016-09-21

    In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of
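
    The model described here, with its single free parameter (the duration of excitatory-inhibitory interaction), can be caricatured as an anti-coincidence window: excitation from one ear is vetoed by inhibition from the other if the two arrive within the window. A toy Python sketch under that simplifying assumption (the fitted model and its physiology are more elaborate):

        import numpy as np

        def ei_interaction_output(exc_times, inh_times, window_ms=3.0):
            # An excitatory event survives only if no inhibitory event
            # occurred within window_ms before it; survivors are output spikes.
            inh = np.sort(np.asarray(inh_times))
            out = []
            for e in exc_times:
                i = np.searchsorted(inh, e)
                if i == 0 or (e - inh[i - 1]) > window_ms:
                    out.append(e)
            return np.array(out)

        # Decorrelated input trains still shift the output rate with ILD-like
        # changes in event density, illustrating robustness to decorrelation.
        rng = np.random.default_rng(6)
        exc = np.sort(rng.uniform(0, 1000, 200))  # excitatory events (ms)
        inh = np.sort(rng.uniform(0, 1000, 150))  # inhibitory events (ms)
        for w in (0.5, 3.0, 10.0):
            n_out = len(ei_interaction_output(exc, inh, window_ms=w))
            print("window", w, "ms -> output spikes:", n_out)

    Lengthening the window makes the output depend on the average density of inhibitory events rather than on their precise timing, which is one way to picture why a ≳3 ms integration window tolerates interaural decorrelation.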

  5. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    Science.gov (United States)

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features. Copyright 2010 Elsevier B.V. All rights reserved.

  6. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors.

  7. Temporal order perception of auditory stimuli is selectively modified by tonal and non-tonal language environments.

    Science.gov (United States)

    Bao, Yan; Szymaszek, Aneta; Wang, Xiaoying; Oron, Anna; Pöppel, Ernst; Szelag, Elzbieta

    2013-12-01

    The close relationship between temporal perception and speech processing is well established. The present study focused on the specific question whether the speech environment could influence temporal order perception in subjects whose language backgrounds are distinctively different, i.e., Chinese (tonal language) vs. Polish (non-tonal language). Temporal order thresholds were measured for both monaurally presented clicks and binaurally presented tone pairs. Whereas the click experiment showed similar order thresholds for the two language groups, the experiment with tone pairs resulted in different observations: while Chinese demonstrated better performance in discriminating the temporal order of two "close frequency" tone pairs (600 Hz and 1200 Hz), Polish subjects showed a reversed pattern, i.e., better performance for "distant frequency" tone pairs (400 Hz and 3000 Hz). These results indicate on the one hand a common temporal mechanism for perceiving the order of two monaurally presented stimuli, and on the other hand neuronal plasticity for perceiving the order of frequency-related auditory stimuli. We conclude that the auditory brain is modified with respect to temporal processing by long-term exposure to a tonal or a non-tonal language. As a consequence of such an exposure different cognitive modes of operation (analytic vs. holistic) are selected: the analytic mode is adopted for "distant frequency" tone pairs in Chinese and for "close frequency" tone pairs in Polish subjects, whereas the holistic mode is selected for "close frequency" tone pairs in Chinese and for "distant frequency" tone pairs in Polish subjects, reflecting a double dissociation of function. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Dissociation between spatial and temporal integration mechanisms in Vernier fusion.

    Science.gov (United States)

    Drewes, Jan; Zhu, Weina; Melcher, David

    2014-12-01

    The visual system constructs a percept of the world across multiple spatial and temporal scales. This raises the questions of whether different scales involve separate integration mechanisms and whether spatial and temporal factors are linked via spatio-temporal reference frames. We investigated this using Vernier fusion, a phenomenon in which the features of two Vernier stimuli presented in close spatio-temporal proximity are fused into a single percept. With increasing spatial offset, perception changes dramatically from a single percept into apparent motion and later, at larger offsets, into two separately perceived stimuli. We tested the link between spatial and temporal integration by presenting two successive Vernier stimuli presented at varying spatial and temporal offsets. The second Vernier either had the same or the opposite offset as the first. We found that the type of percept depended not only on spatial offset, as reported previously, but interacted with the temporal parameter as well. At temporal separations around 30-40 ms the majority of trials were perceived as motion, while above 70 ms predominantly two separate stimuli were reported. The dominance of the second Vernier varied systematically with temporal offset, peaking around 40 ms ISI. Same-offset conditions showed increasing amounts of perceived separation at large ISIs, but little dependence on spatial offset. As subjects did not always completely fuse stimuli, we separated trials by reported percept (single/fusion, motion, double/segregation). We found systematic indications of spatial fusion even on trials in which subjects perceived temporal segregation. These findings imply that spatial integration/fusion may occur even when the stimuli are perceived as temporally separate entities, suggesting that the mechanisms responsible for temporal segregation and spatial integration may not be mutually exclusive. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Spatial interactions determine temporal feature integration as revealed by unmasking

    Directory of Open Access Journals (Sweden)

    Michael H. Herzog

    2006-01-01

    Full Text Available Feature integration is one of the most fundamental problems in neuroscience. In a recent contribution, we showed that a trailing grating can diminish the masking effects one vernier exerts on another, preceding vernier. Here, we show that this temporal unmasking depends on neural spatial interactions related to the trailing grating. Hence, our paradigm allows us to study the spatio-temporal interactions underlying feature integration.

  10. Gait variability is altered in older adults when listening to auditory stimuli with differing temporal structures.

    Science.gov (United States)

    Kaipust, Jeffrey P; McGrath, Denise; Mukherjee, Mukul; Stergiou, Nicholas

    2013-08-01

    Gait variability in the context of a deterministic dynamical system may be quantified using nonlinear time series analyses that characterize the complexity of the system. Pathological gait exhibits altered gait variability. It can be either too periodic and predictable, or too random and disordered, as is the case with aging. While gait therapies often focus on restoration of linear measures such as gait speed or stride length, we propose that the goal of gait therapy should be to restore optimal gait variability, which exhibits chaotic fluctuations and is the balance between predictability and complexity. In this context, our purpose was to investigate how listening to different auditory stimuli affects gait variability. Twenty-seven young and 27 elderly subjects walked on a treadmill for 5 min while listening to white noise, a chaotic rhythm, a metronome, and with no auditory stimulus. Stride length, step width, and stride intervals were calculated for all conditions. Detrended Fluctuation Analysis was then performed on these time series. A quadratic trend analysis determined that an idealized inverted-U shape described the relationship between gait variability and the structure of the auditory stimuli for the elderly group, but not for the young group. This proof-of-concept study shows that the gait of older adults may be manipulated using auditory stimuli. Future work will investigate which structures of auditory stimuli lead to improvements in functional status in older adults.
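
    Detrended Fluctuation Analysis itself is compact enough to sketch. The following minimal Python implementation (first-order detrending; the window choices are illustrative) returns the scaling exponent alpha, which is ~0.5 for white noise and ~1.0 for 1/f-like fluctuations such as healthy stride-interval series:

        import numpy as np

        def dfa_alpha(x, scales=None):
            # DFA-1: slope of log F(n) vs. log n, where F(n) is the RMS
            # fluctuation of the integrated, locally detrended series.
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())  # integrated profile
            if scales is None:
                scales = np.unique(
                    np.logspace(np.log10(4), np.log10(len(x) // 4), 15).astype(int))
            flucts = []
            for n in scales:
                n_seg = len(y) // n
                segs = y[: n_seg * n].reshape(n_seg, n)
                t = np.arange(n)
                ms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, 1)  # local linear detrend
                    ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                flucts.append(np.sqrt(np.mean(ms)))
            alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
            return alpha

        rng = np.random.default_rng(1)
        print(dfa_alpha(rng.standard_normal(2000)))  # ~0.5 for white noise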

  11. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    Science.gov (United States)

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  12. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  13. The time window of multisensory integration: relating reaction times and judgments of temporal order.

    Science.gov (United States)

    Diederich, Adele; Colonius, Hans

    2015-04-01

Even though visual and auditory information about one and the same event often do not arrive at the sensory receptors at the same time, due to different physical transmission times of the modalities, the brain maintains a unitary perception of the event, at least within a certain range of sensory arrival time differences. The properties of this "temporal window of integration" (TWIN), and its recalibration due to task requirements, attention, and other variables, have recently been investigated intensively. Up to now, however, there has been no consistent definition of "temporal window" across different paradigms for measuring its width. Here we propose such a definition based on our TWIN model (Colonius & Diederich, 2004). It applies to judgments of temporal order (or simultaneity) as well as to reaction time (RT) paradigms. Reanalyzing data from Mégevand, Molholm, Nayak, & Foxe (2013) by fitting the TWIN model to data from both paradigms, we confirmed the authors' hypothesis that the temporal window in an RT task tends to be wider than in a temporal-order judgment (TOJ) task. This first step toward a unified concept of TWIN should be a valuable tool in guiding investigations of the neural and cognitive bases of this so-far-somewhat-elusive concept. (c) 2015 APA, all rights reserved.
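
    The core quantity in the TWIN framework, the probability that the two peripheral processes terminate within a common window, is easy to estimate by simulation. A simplified Monte Carlo sketch (exponential peripheral processing times with made-up rates; the full model additionally conditions on which modality wins the first-stage race):

        import numpy as np

        rng = np.random.default_rng(2)

        def p_integration(soa_ms, window_ms, rate_v=1 / 50., rate_a=1 / 30., n=100_000):
            # Visual and auditory peripheral processing times are exponential;
            # integration occurs when both terminate within window_ms of each
            # other. SOA > 0 means the auditory stimulus lags the visual one.
            v = rng.exponential(1.0 / rate_v, n)           # visual termination
            a = soa_ms + rng.exponential(1.0 / rate_a, n)  # auditory termination
            return np.mean(np.abs(v - a) < window_ms)

        for soa in (-100, 0, 100, 300):
            print(soa, round(p_integration(soa, window_ms=200.0), 3))

    The printed probabilities fall off with SOA, which is the model's account of why integration effects (and their behavioral signatures in TOJ and RT tasks) vanish at large audiovisual asynchronies.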

  14. The pattern of Fos expression in the rat auditory brainstem changes with the temporal structure of binaural electrical intracochlear stimulation.

    Science.gov (United States)

    Jakob, Till F; Döring, Ulrike; Illing, Robert-Benjamin

    2015-04-01

The immediate-early gene c-fos, with its protein product Fos, has been used as a powerful tool to investigate neuronal activity and plasticity following sensory stimulation. Fos combines with Jun, another IEG product, to form the dimeric transcription factor activator protein 1 (AP-1), which has been implicated in a variety of cellular functions such as neuronal plasticity, apoptosis, and regeneration. The intracellular emergence of Fos indicates a functional state of nerve cells directed towards molecular and morphological changes. The central auditory system is organized to detect stimulus intensity, spectral composition, and binaural balance through neurons arranged in a complex network of ascending, descending and commissural pathways. Here we compare monaural and binaural electrical intracochlear stimulation (EIS) in normal hearing and early postnatally deafened rats. Binaural stimulation was done either synchronously or asynchronously. The auditory brainstem of hearing and deaf rats responds differently, with dramatically increased Fos expression in the deaf group, as if the network had no pre-orientation for how to organize sensory activity. Binaural EIS does not result in the simple sum of two independent monaural stimulations, as asynchronous stimulation invokes stronger Fos activation than synchronous stimulation almost everywhere in the auditory brainstem. The differential response to the synchronicity of stimulation emphasizes the importance of the temporal structure of EIS with respect to its potential for changing brain structure and brain function in stimulus-specific ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Formal learning theory dissociates brain regions with different temporal integration.

    Science.gov (United States)

    Gläscher, Jan; Büchel, Christian

    2005-07-21

    Learning can be characterized as the extraction of reliable predictions about stimulus occurrences from past experience. In two experiments, we investigated the interval of temporal integration of previous learning trials in different brain regions using implicit and explicit Pavlovian fear conditioning with a dynamically changing reinforcement regime in an experimental setting. With formal learning theory (the Rescorla-Wagner model), temporal integration is characterized by the learning rate. Using fMRI and this theoretical framework, we are able to distinguish between learning-related brain regions that show long temporal integration (e.g., amygdala) and higher perceptual regions that integrate only over a short period of time (e.g., fusiform face area, parahippocampal place area). This approach allows for the investigation of learning-related changes in brain activation, as it can dissociate brain areas that differ with respect to their integration of past learning experiences by either computing long-term outcome predictions or instantaneous reinforcement expectancies.
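
    In the Rescorla-Wagner model the learning rate directly sets the temporal integration interval: the value estimate is an exponentially weighted average of past reinforcement, with an effective window of roughly 1/alpha trials. A minimal Python sketch with a dynamically changing reinforcement regime (the probabilities and trial counts are illustrative):

        import numpy as np

        def rescorla_wagner(outcomes, alpha):
            # Trial-by-trial value update: V <- V + alpha * (r - V).
            # Small alpha = long temporal integration over past trials;
            # large alpha = tracking of only the most recent reinforcement.
            v, trace = 0.0, []
            for r in outcomes:
                v += alpha * (r - v)
                trace.append(v)
            return np.array(trace)

        # Reinforcement probability switches from 0.8 to 0.2 halfway through
        rng = np.random.default_rng(3)
        outcomes = np.concatenate(
            [rng.random(100) < 0.8, rng.random(100) < 0.2]).astype(float)
        slow = rescorla_wagner(outcomes, alpha=0.05)  # long integration (amygdala-like)
        fast = rescorla_wagner(outcomes, alpha=0.5)   # short integration (perceptual-like)
        print(round(slow[-1], 2), round(fast[-1], 2))

    After the contingency switch, the high-alpha learner converges to the new 0.2 rate within a few trials, while the low-alpha learner still carries the earlier history, which is the dissociation the fMRI analysis exploits.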

  16. Organization of auditory areas in the superior temporal gyrus of marmoset monkeys revealed by real-time optical imaging.

    Science.gov (United States)

    Nishimura, Masataka; Takemoto, Makoto; Song, Wen-Jie

    2017-11-28

    The prevailing model of the primate auditory cortex proposes a core-belt-parabelt structure. The model proposes three auditory areas in the lateral belt region; however, it may contain more, as this region has been mapped only at a limited spatial resolution. To explore this possibility, we examined the auditory areas in the lateral belt region of the marmoset using a high-resolution optical imaging technique. Based on responses to pure tones, we identified multiple areas in the superior temporal gyrus. The three areas in the core region, the primary area (A1), the rostral area (R), and the rostrotemporal area, were readily identified from their frequency gradients and positions immediately ventral to the lateral sulcus. Three belt areas were identified with frequency gradients and relative positions to A1 and R that were in agreement with previous studies: the caudolateral area, the middle lateral area, and the anterolateral area (AL). Situated between R and AL, however, we identified two additional areas. The first was located caudoventral to R with a frequency gradient in the ventrocaudal direction, which we named the medial anterolateral (MAL) area. The second was a small area with no obvious tonotopy (NT), positioned between the MAL and AL areas. Both the MAL and NT areas responded to a wide range of frequencies (at least 2-24 kHz). Our results suggest that the belt region caudoventral to R is more complex than previously proposed, and we thus call for a refinement of the current primate auditory cortex model.

  17. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. The present study analyzed the role of modulation filtering … in the preprocessing of different speech intelligibility models by comparing predictions from models that either assume a spectro-temporal (i.e., two-dimensional) or a temporal-only (i.e., one-dimensional) modulation filterbank. Furthermore, the role of the decision metric for speech intelligibility was investigated … subtraction. The results suggested that a decision metric based on the SNRenv may provide a more general basis for predicting speech intelligibility than a metric based on the MTF. Moreover, the one-dimensional modulation filtering process was found to be sufficient to account for the data when combined
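
    As a rough illustration of an SNRenv-style decision metric, the sketch below compares the envelope power of noisy speech with that of the noise alone within a single modulation band. It strips away the gammatone and modulation filterbank front ends of the published framework, so it is a caricature of the idea, not the model itself; the signal in the usage example is synthetic:

        import numpy as np

        def snr_env_db(speech_plus_noise, noise, fs, fm_lo=1.0, fm_hi=16.0):
            # Envelope power of the noisy speech in excess of the noise-alone
            # envelope power, relative to the latter, in one modulation band.
            def env_power(x):
                env = np.abs(x)           # crude envelope (no auditory front end)
                env = env - env.mean()
                spec = np.abs(np.fft.rfft(env)) ** 2 / len(env)
                f = np.fft.rfftfreq(len(env), 1.0 / fs)
                band = (f >= fm_lo) & (f <= fm_hi)
                return spec[band].sum()
            p_sn, p_n = env_power(speech_plus_noise), env_power(noise)
            snr = max(p_sn - p_n, 1e-10) / p_n  # envelope-domain SNR
            return 10.0 * np.log10(snr)

        # Synthetic "speech": noise carrier with a 4-Hz envelope modulation
        fs = 16000
        t = np.arange(fs) / fs
        rng = np.random.default_rng(4)
        noise = rng.standard_normal(fs)
        speech_like = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * rng.standard_normal(fs)
        print(snr_env_db(speech_like + noise, noise, fs))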

  18. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  19. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates...

  20. Visual-Auditory Integration during Speech Imitation in Autism

    Science.gov (United States)

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  1. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2017-01-01

Full Text Available There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensorineural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mildly to moderately hearing impaired individuals (n = 18), and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex, by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between those of the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the

  2. HIT, hallucination focused integrative treatment as early intervention in psychotic adolescents with auditory hallucinations : a pilot study

    NARCIS (Netherlands)

    Jenner, JA; van de Willige, G

Objective: Early intervention in psychosis is considered important for relapse prevention. The limited results of monotherapies have prompted the development of multimodular programmes. The present study tests the feasibility and effectiveness of HIT, an integrative early intervention treatment for auditory

  3. Integration of auditory and kinesthetic information in motion: alterations in Parkinson's disease.

    Science.gov (United States)

    Sabaté, Magdalena; Llanos, Catalina; Rodríguez, Manuel

    2008-07-01

The main aim of this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motion control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young participants, age-matched healthy participants, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) and under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensory modalities. These data show that the influence of multisensory integration on motion changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that was different in PD patients. PsycINFO Database Record (c) 2008 APA, all rights reserved.

4. Temporal processing, localization and auditory closure in individuals with unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Regiane Nishihata

    2012-01-01

PURPOSE: To assess temporal processing abilities, sound localization, and auditory closure, and to investigate possible associations with complaints of learning, communication and language difficulties in individuals with unilateral hearing loss. METHODS: Participants were 26 individuals with ages between 8 and 15 years, divided into two groups: Unilateral hearing loss group and Normal hearing group. Each group was composed of 13 individuals, matched by gender, age and educational level. All subjects were submitted to anamnesis, peripheral hearing evaluation, and auditory processing evaluation through behavioral tests of sound localization, sequential memory, the Random Gap Detection Test, and a speech-in-noise test. Nonparametric statistical tests were used to compare the groups, considering the presence or absence of hearing loss and the ear with hearing loss. RESULTS: Unilateral hearing loss started during preschool, and had unknown or identified etiologies, such as meningitis, traumas or mumps. Most individuals reported delays in speech, language and learning development, especially those with hearing loss in the right ear. The group with hearing loss had worse responses in the abilities of temporal ordering and resolution, sound localization and auditory closure. Individuals with hearing loss in the left ear showed worse results than those with hearing loss in the right ear in all abilities, except sound localization. CONCLUSION: The presence of unilateral hearing loss causes sound localization, auditory closure, temporal ordering and temporal resolution difficulties. Individuals with unilateral hearing loss in the right ear have more complaints than those with unilateral hearing loss in the left ear. Individuals with hearing loss in the left ear have more difficulties in auditory closure, temporal resolution, and temporal ordering.

  5. Asymmetry of temporal auditory T-complex: right ear-left hemisphere advantage in Tb timing in children.

    Science.gov (United States)

    Bruneau, Nicole; Bidet-Caulet, Aurélie; Roux, Sylvie; Bonnet-Brilhault, Frédérique; Gomot, Marie

    2015-02-01

    To investigate brain asymmetry of the temporal auditory evoked potentials (T-complex) in response to monaural stimulation in children compared to adults. Ten children (7 to 9 years) and ten young adults participated in the study. All were right-handed. The auditory stimuli used were tones (1100 Hz, 70 dB SPL, 50 ms duration) delivered monaurally (right, left ear) at four different levels of stimulus onset asynchrony (700-1100-1500-3000 ms). Latency and amplitude of responses were measured at left and right temporal sites according to the ear stimulated. Peaks of the three successive deflections (Na-Ta-Tb) of the T-complex were greater in amplitude and better defined in children than in adults. Amplitude measurements in children indicated that Na culminates on the left hemisphere whatever the ear stimulated whereas Ta and Tb culminate on the right hemisphere but for left ear stimuli only. Peak latency displayed different patterns of asymmetry. Na and Ta displayed shorter latencies for contralateral stimulation. The original finding was that Tb peak latency was the shortest at the left temporal site for right ear stimulation in children. Amplitude increased and/or peak latency decreased with increasing SOA, however no interaction effect was found with recording site or with ear stimulated. Our main original result indicates a right ear-left hemisphere timing advantage for Tb peak in children. The Tb peak would therefore be a good candidate as an electrophysiological marker of ear advantage effects during dichotic stimulation and of functional inter-hemisphere interactions and connectivity in children. Copyright © 2014. Published by Elsevier B.V.

  6. Temporal Response Properties of the Auditory Nerve in Implanted Children with Auditory Neuropathy Spectrum Disorder and Implanted Children with Sensorineural Hearing Loss.

    Science.gov (United States)

    He, Shuman; Abbas, Paul J; Doyle, Danielle V; McFayden, Tyler C; Mulherin, Stephen

    2016-01-01

This study aimed to (1) characterize temporal response properties of the auditory nerve in implanted children with auditory neuropathy spectrum disorder (ANSD), and (2) compare results recorded in implanted children with ANSD with those measured in implanted children with sensorineural hearing loss (SNHL). Participants included 28 children with ANSD and 29 children with SNHL. All subjects used Cochlear Nucleus devices in their test ears. Both ears were tested in 6 children with ANSD and 3 children with SNHL. For all other subjects, only one ear was tested. The electrically evoked compound action potential (ECAP) was measured in response to each of the 33 pulses in a pulse train (excluding the second pulse) for one apical, one middle-array, and one basal electrode. The pulse train was presented in a monopolar-coupled stimulation mode at 4 pulse rates: 500, 900, 1800, and 2400 pulses per second. Response metrics included the averaged amplitude, latencies of response components and response width, the alternating depth and the amount of neural adaptation. These dependent variables were quantified based on the last six ECAPs or the six ECAPs occurring within a time window centered around 11 to 12 msec. A generalized linear mixed model was used to compare these dependent variables between the 2 subject groups. The slope of the linear fit of the normalized ECAP amplitudes (re. amplitude of the first ECAP response) over the duration of the pulse train was used to quantify the amount of ECAP increment over time for a subgroup of 9 subjects. Pulse train-evoked ECAPs were measured in all but 8 subjects (5 with ANSD and 3 with SNHL). ECAPs measured in children with ANSD had smaller amplitude, longer averaged P2 latency and greater response width than children with SNHL. However, differences in these two groups were only observed for some electrodes. No differences in averaged N1 latency or in the alternating depth were observed between children with ANSD and children with
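
    The slope metric used for the subgroup analysis is straightforward: normalize each ECAP amplitude to the first response and fit a line across the pulse train. A minimal Python sketch with synthetic amplitudes (a decaying train, so the slope comes out negative, i.e., adaptation rather than increment):

        import numpy as np

        def ecap_amplitude_slope(amplitudes):
            # Slope of a linear fit to ECAP amplitudes across a pulse train,
            # normalized to the first ECAP (per-pulse change; negative =
            # adaptation, positive = increment over time).
            amps = np.asarray(amplitudes, dtype=float)
            norm = amps / amps[0]
            pulse_index = np.arange(len(norm))
            slope, _ = np.polyfit(pulse_index, norm, 1)
            return slope

        # Synthetic train of 32 analyzable ECAPs decaying toward a plateau
        amps = 100 * (0.7 + 0.3 * np.exp(-np.arange(32) / 5.0))
        print(ecap_amplitude_slope(amps))  # negative: neural adaptation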

  7. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, in order to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information, in terms of the dynamic aspects of F0 control, to recent findings obtained with the frequency-shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)]. In a series of experiments, the dependencies of F0-control system parameters on subject, F0, and style (musical expression and speaking) were tested with six participants: three male and three female students specializing in music education. They were asked to sustain a Japanese vowel /a/ for about 10 s, repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech, with the modulation driven by an M-sequence. The results qualitatively replicated the previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by Grants-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
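
    Logarithmic warping of the temporal axis can be pictured as resampling a uniformly sampled signal at log-spaced time points. The following is an illustrative sketch only; the authors' actual parameter extraction procedure is not specified here:

```python
import numpy as np

def log_time_resample(signal, fs, t_start=0.01, n_points=1000):
    """Resample a uniformly sampled signal onto a logarithmic time
    axis. t_start is the first time point in seconds (must be > 0,
    since log(0) is undefined)."""
    t_uniform = np.arange(len(signal)) / fs
    t_log = np.logspace(np.log10(t_start), np.log10(t_uniform[-1]),
                        n_points)
    return t_log, np.interp(t_log, t_uniform, signal)
```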

  8. The role of auditory temporal cues in the fluency of stuttering adults

    Directory of Open Access Journals (Sweden)

    Juliana Furini

    Purpose: to compare the frequency of disfluencies and the speech rate in spontaneous speech and reading in adults with and without stuttering under non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 who stutter (Research Group, RG) and 15 who do not (Control Group, CG). The procedures were audiological assessment and speech fluency evaluation under two listening conditions, normal and delayed auditory feedback (a 100-millisecond delay imposed by the Fono Tools software). Results: DAF caused a significant improvement in the fluency of spontaneous speech in the RG when compared to speech under NAF. The effect of DAF was different in the CG: it increased the common disfluencies and the total number of disfluencies in spontaneous speech and reading, and also increased the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks under both listening conditions for the frequency of stuttering-like disfluencies and the total number of disfluencies, and for the syllables-per-minute and words-per-minute rates under NAF. Conclusion: the results demonstrate that delayed auditory feedback promoted fluency in the spontaneous speech of adults who stutter without interfering with their speech rate. In non-stuttering adults it increased the number of common disfluencies and the total number of disfluencies, and reduced speech rate in spontaneous speech and reading.

  9. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    2010-07-01

    A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (ISI). To determine how well these response aspects capture information about stimulus repetition rate, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially more information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information, indicative of a place code. Diversity in local processing emphasis and the distribution of different repetition-rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
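
    Mutual information between the stimulus repetition rate and a discretized response variable (ISI, firing rate, or a timing measure) can be estimated from a joint histogram. A minimal plug-in estimator sketch, not the authors' analysis pipeline:

```python
import numpy as np

def mutual_information_bits(stim_labels, response_values, n_bins=8):
    """Plug-in estimate of I(stimulus; response) in bits, with the
    response variable binned into n_bins histogram bins."""
    edges = np.histogram_bin_edges(response_values, n_bins)
    resp = np.digitize(response_values, edges)  # bins 0 .. n_bins + 1
    stims = sorted(set(stim_labels))
    index = {s: i for i, s in enumerate(stims)}
    joint = np.zeros((len(stims), n_bins + 2))
    for s, r in zip(stim_labels, resp):
        joint[index[s], r] += 1
    p = joint / joint.sum()
    outer = p.sum(1, keepdims=True) @ p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / outer[nz])).sum())
```

    Note that plug-in estimates of this kind are biased upward for small trial counts; published analyses typically apply a bias correction or shuffling control.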

  10. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Directory of Open Access Journals (Sweden)

    Jifan Zhou

    Visual crowding, the inability to see an object when it is surrounded by flankers in the periphery, does not block semantic activation: words rendered unrecognizable by visual crowding still generate robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word, in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context that affects the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective) combination integration, the simplest kind of temporal semantic integration, did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.

  11. Auditory temporal-regularity processing correlates with language and literacy skill in early adulthood.

    Science.gov (United States)

    Grube, Manon; Cooper, Freya E; Griffiths, Timothy D

    2013-01-01

    This work tests the hypothesis that language skill depends on the ability to incorporate streams of sound into an accurate temporal framework. We tested the ability of young English-speaking adults to process single time intervals and rhythmic sequences of such intervals, hypothesized to be relevant to the analysis of the temporal structure of language. The data implicate a specific role for the ability to process beat-based temporal regularities in phonological language and literacy skill.

  12. Spatio-temporal data analytics for wind energy integration

    CERN Document Server

    Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief presents spatio-temporal data analytics for wind energy integration using stochastic modeling and optimization methods. It explores techniques for efficiently integrating renewable energy generation into bulk power grids. The operational challenges of wind power and its variability are carefully examined. A spatio-temporal analysis approach enables the authors to develop Markov-chain-based short-term forecasts of wind farm power generation. To deal with wind ramp dynamics, a support-vector-machine-enhanced Markov model is introduced. The stochastic optimization of economic dispatch is also discussed.
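
    A Markov-chain short-term forecast of this kind can be sketched by discretizing power output into states, estimating a transition matrix from historical data, and propagating the state distribution forward. A minimal first-order illustration, not the model developed in the book:

```python
import numpy as np

def fit_transition_matrix(power_series, n_states=10):
    """Estimate a first-order Markov transition matrix from a wind
    power series given as fractions of rated capacity in [0, 1]."""
    s = np.minimum((np.asarray(power_series) * n_states).astype(int),
                   n_states - 1)
    T = np.zeros((n_states, n_states))
    for a, b in zip(s[:-1], s[1:]):
        T[a, b] += 1
    return T / np.maximum(T.sum(axis=1, keepdims=True), 1)

def forecast_distribution(T, current_state, steps=1):
    """Probability distribution over power states `steps` ahead."""
    p0 = np.eye(T.shape[0])[current_state]
    return p0 @ np.linalg.matrix_power(T, steps)
```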

  13. Auditory processing, temporal resolution and gap detection test: literature review

    Directory of Open Access Journals (Sweden)

    Alessandra Giannella Samelli

    2008-01-01

    BACKGROUND: temporal auditory processing and temporal resolution. PURPOSE: to review the literature on auditory processing and temporal resolution, as well as the different marker parameters used in gap detection tests and how they can interfere with threshold determination. CONCLUSION: auditory processing and temporal resolution are key factors for language development. Because of the diverse parameters that can be used in gap detection tests, the measured gap detection thresholds can vary considerably.

  14. Speech repetition as a window on the neurobiology of auditory-motor integration for speech: A voxel-based lesion symptom mapping study.

    Science.gov (United States)

    Rogalsky, Corianne; Poppa, Tasha; Chen, Kuan-Hua; Anderson, Steven W; Damasio, Hanna; Love, Tracy; Hickok, Gregory

    2015-05-01

    For more than a century, speech repetition has been used as an assay for gauging the integrity of the auditory-motor pathway in aphasia, thought classically to involve a linkage between Wernicke's area and Broca's area via the arcuate fasciculus. During the last decade, evidence primarily from functional imaging in healthy individuals has refined this picture both computationally and anatomically, suggesting the existence of a cortical hub located at the parietal-temporal boundary (area Spt) that functions to integrate auditory and motor speech networks for both repetition and spontaneous speech production. While functional imaging research can pinpoint the regions activated in repetition/auditory-motor integration, lesion-based studies are needed to infer causal involvement. Previous lesion studies of repetition have yielded mixed results with respect to Spt's critical involvement in speech repetition. The present study used voxel-based lesion symptom mapping (VLSM) to investigate the neuroanatomy of repetition of both real words and non-words in a sample of 47 patients with focal left hemisphere brain damage. VLSMs identified a large voxel cluster spanning gray and white matter in the left temporal-parietal junction, including area Spt, where damage was significantly related to poor non-word repetition. Repetition of real words implicated a very similar dorsal network including area Spt. Cortical regions including Spt were implicated in repetition performance even when white matter damage was factored out. In addition, removing variance associated with speech perception abilities did not alter the overall lesion pattern for either task. Together with past functional imaging work, our results suggest that area Spt is integral in both word and non-word repetition, that its contribution is above and beyond that made by white matter pathways, and is not driven by perceptual processes alone. These findings are highly consistent with the claim that Spt is an area of auditory-motor integration.

  15. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  16. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses starting 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of the pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system, and, as indexed by the ERP suppression, the sensory feedback contribution to their processing is reduced. These findings provide new insights into the neural mechanisms of vocal production and motor control. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Interactions between stimulus-specific adaptation and visual auditory integration in the forebrain of the barn owl.

    Science.gov (United States)

    Reches, Amit; Netser, Shai; Gutfreund, Yoram

    2010-05-19

    Neural adaptation and visual auditory integration are two well-studied and common phenomena in the brain, yet little is known about the interaction between them. In the present study, we investigated a visual forebrain area in barn owls, the entopallium (E), which has recently been shown to encompass auditory responses as well. Responses of neurons to sequences of visual, auditory, and bimodal (visual and auditory together) events were analyzed. Sequences comprised two stimuli, one with a low probability of occurrence and the other with a high probability. Neurons in the E tended to respond more strongly to low probability visual stimuli than to high probability stimuli. Such a phenomenon is known as stimulus-specific adaptation (SSA) and is considered to be a neural correlate of change detection. Responses to the corresponding auditory sequences did not reveal an equivalent tendency. Interestingly, however, SSA to bimodal events was stronger than to visual events alone. This enhancement was apparent when the visual and auditory stimuli were presented from matching locations in space (congruent) but not when the bimodal stimuli were spatially incongruent. These findings suggest that the ongoing task of detecting unexpected events can benefit from the integration of visual and auditory information.

  18. Specialization of left auditory cortex for speech perception in man depends on temporal coding

    National Research Council Canada - National Science Library

    Liégeois-Chauvel, C; de Graaf, J B; Laguitton, V; Chauvel, P

    1999-01-01

    … Natural voiced (/ba/, /da/, /ga/) and voiceless (/pa/, /ta/, /ka/) syllables, spoken by a native French speaker, were used to study the processing of a specific temporally based acoustico-phonetic feature, the voice onset time (VOT)…

  19. The adaptation of visual and auditory integration in the barn owl superior colliculus with Spike Timing Dependent Plasticity.

    Science.gov (United States)

    Huo, Juan; Murray, Alan

    2009-09-01

    To localize a seen object, the superior colliculus of the barn owl integrates visual and auditory localization cues derived from the brain's sensory systems. These cues are organized as visual and auditory maps. The alignment between the visual and auditory maps is very important for accurate localization of prey. Blindness or prism wearing may disrupt this alignment. The juvenile barn owl can adapt its auditory map to such a mismatch after several weeks of training. Here we investigate this process by building a computational model of auditory and visual integration in the deep superior colliculus (SC). The adaptation of map alignment is based on activity-dependent axon development in the inferior colliculus (IC). This axon-growth process is instructed by an inhibitory network in the SC, with the strength of the inhibition adjusted by spike-timing-dependent plasticity (STDP). The simulation results of this model are in line with the biological experiments and support the idea that STDP is involved in the alignment of sensory maps. The model also provides a new spiking-neuron-based mechanism capable of eliminating the disparity in visual and auditory map integration.
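
    The pair-based STDP rule at the heart of such models is typically an exponentially windowed weight update: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. A minimal sketch with illustrative parameter values, not those of the paper:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update; spike times in milliseconds."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiation
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post before pre: depression
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)  # clip to allowed weight range
```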

  20. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

    Yael eHazan

    2015-02-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine whether the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras to barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis onto the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted, and the distances between the gaze position and the nearest items and the loudspeaker were measured. The head saccades were rarely directed towards the location of the sound source but rather to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target.

  1. Suppressed Visual Looming Stimuli are Not Integrated with Auditory Looming Signals: Evidence from Continuous Flash Suppression

    Directory of Open Access Journals (Sweden)

    Pieter Moors

    2015-02-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  2. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    Science.gov (United States)

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of the syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  3. MEG evidence that the central auditory system simultaneously encodes multiple temporal cues

    NARCIS (Netherlands)

    Simpson, M.I.G.; Barnes, G.R.; Johnson, S.R.; Hillebrand, A.; Singh, K.D.; Green, G.G.R.

    2009-01-01

    Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of

  4. Age-group differences in speech identification despite matched audiometrically normal hearing: contributions from auditory temporal processing and cognition.

    Science.gov (United States)

    Füllgrabe, Christian; Moore, Brian C J; Stone, Michael A

    2014-01-01

    Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60-79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125-6 kHz were matched to nine young (YNH; 18-27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5-180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric thresholds.

  5. Anatomical Pathways for Auditory Memory in Primates

    Directory of Open Access Journals (Sweden)

    Monica Munoz-Lopez

    2010-10-01

    Episodic memory, or the ability to store context-rich information about everyday events, depends on the hippocampal formation (entorhinal cortex, subiculum, presubiculum, parasubiculum, hippocampus proper, and dentate gyrus). A substantial number of behavioral lesion and anatomical studies have contributed to our understanding of how visual stimuli are retained in episodic memory. However, whether auditory memory is organized similarly is still unclear. One hypothesis is that, like the 'visual ventral stream', for which the connections of the inferior temporal gyrus with the perirhinal cortex are necessary for visual recognition in monkeys, direct connections between the auditory association areas of the superior temporal gyrus and the hippocampal formation, and with the parahippocampal region (temporal pole, perirhinal, and posterior parahippocampal cortices), might also underlie recognition memory for sounds. Alternatively, the anatomical organization of memory could be different in audition. This alternative 'indirect stream' hypothesis posits that, unlike the visual association cortex, the majority of auditory association cortex makes one or more synapses in intermediate, polymodal areas, where information from other sensory modalities may be integrated, before reaching the medial temporal memory system. This review considers anatomical studies that can support either one or both hypotheses, focusing on studies of the primate brain that have reported not only direct connections between auditory association cortex and medial temporal areas but, importantly, also possible indirect pathways by which auditory information may reach the medial temporal lobe memory system.

  6. Central auditory processing. III. The "cocktail party" effect and anterior temporal lobectomy.

    Science.gov (United States)

    Efron, R; Crandall, P H; Koss, B; Divenyi, P L; Yund, E W

    1983-07-01

    The capacity to selectively attend to only one of multiple, spatially separated, simultaneous sound sources (the "cocktail party" effect) was evaluated in normal subjects and in those with anterior temporal lobectomy using common environmental sounds. A significant deficit in this capacity was observed for those stimuli located on the side of space contralateral to the lobectomy, a finding consistent with the hypothesis that within each anterior temporal lobe is a mechanism that is normally capable of enhancing the perceptual salience of one acoustic stimulus on the opposite side of space when other sound sources are present on that side. Damage to this mechanism also appears to be associated with a deficit of spatial localization for sounds contralateral to the lesion.

  7. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and the maximum a posteriori (MAP) framework is then used to set up the fusion model. A gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
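
    In a MAP formulation of this kind, the fused image minimizes a sum of data-fidelity terms, one per observation model, plus a regularizing prior. A minimal gradient-descent sketch for the quadratic case; the observation operators, prior, and step size are placeholders rather than those of the paper:

```python
import numpy as np

def map_fuse(observations, operators, lam=0.1, lr=0.1, n_iter=200):
    """Minimize sum_k ||A_k x - y_k||^2 + lam * smoothness(x).
    `operators` is a list of (A, A_T) pairs of linear forward and
    adjoint functions; `observations` holds the matching images y_k."""
    x = np.zeros_like(operators[0][1](observations[0]))
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for (A, A_T), y in zip(operators, observations):
            grad += 2.0 * A_T(A(x) - y)       # data-fidelity gradient
        # gradient of a quadratic smoothness prior (discrete Laplacian)
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= lr * (grad - 2.0 * lam * lap)
    return x
```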

  8. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp eHoman

    2013-06-01

    Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations, despite a decrease in symptoms after TMS. These findings are consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  9. Aging and Spectro-Temporal Integration of Speech

    Directory of Open Access Journals (Sweden)

    John H. Grose

    2016-10-01

    The purpose of this study was to determine the effects of age on the spectro-temporal integration of speech. The hypothesis was that the integration of speech fragments distributed over frequency, time, and ear of presentation is reduced in older listeners, even for those with good audiometric hearing. Younger, middle-aged, and older listeners (10 per group) with good audiometric hearing participated. They were each tested under seven conditions that encompassed combinations of spectral, temporal, and binaural integration. Sentences were filtered into two bands centered at 500 Hz and 2500 Hz, with the criterion bandwidth tailored for each participant. In some conditions, the speech bands were individually square-wave interrupted at a rate of 10 Hz. Configurations of uninterrupted, synchronously interrupted, and asynchronously interrupted frequency bands were constructed that constituted speech fragments distributed across frequency, time, and ear of presentation. The overarching finding was that, for most configurations, performance was not differentially affected by listener age. Although speech intelligibility varied across conditions, there was no evidence of performance deficits in older listeners in any condition. This study indicates that age, per se, does not necessarily undermine the ability to integrate fragments of speech dispersed across frequency and time.

  10. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: the processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series that could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence-processing network, in both the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment in which participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  11. Medial temporal lobe roles in human path integration.

    Directory of Open Access Journals (Sweden)

    Naohide Yamamoto

    Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those with lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. These results therefore put forth a new idea: the role of MTL structures in human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.

  12. Validation of three adaptations of the Meaningful Auditory Integration Scale (MAIS) to German, English and Polish.

    Science.gov (United States)

    Weichbold, Viktor; Anderson, Ilona; D'Haese, Patrick

    2004-03-01

    The Meaningful Auditory Integration Scale (MAIS) is a parent-report questionnaire for assessing auditory behaviour in aurally habilitated children. This study addressed the reliability and convergent validity of three different language versions of the MAIS: English, German, and Polish. In total, 114 parents (English, n = 27; Polish, n = 37; German, n = 50) completed the MAIS preoperatively and at 6 months after cochlear implantation. Internal reliability (Cronbach's alpha) ranged from 0.92 to 0.95 preoperatively, and from 0.87 to 0.93 at 6 months. Split-half reliability was at least 0.90 preoperatively, and ranged from 0.76 to 0.89 at 6 months. Corrected item-total correlation coefficients were significant. Correlation of the MAIS with the Listening Progress Profile (LP), as a measure of convergent validity, yielded coefficients between 0.81 and 0.73 preoperatively, and between 0.79 and 0.61 at 6 months. These findings demonstrate high reliability and convergent validity of the three MAIS versions.

  13. Enabling an Integrated Rate-temporal Learning Scheme on Memristor

    Science.gov (United States)

    He, Wei; Huang, Kejie; Ning, Ning; Ramanathan, Kiruthika; Li, Guoqi; Jiang, Yu; Sze, Jiayin; Shi, Luping; Zhao, Rong; Pei, Jing

    2014-04-01

    The learning scheme is key to utilizing spike-based computation and to emulating neural/synaptic behaviors toward the realization of cognition. Biological observations reveal an integrated spike-time- and spike-rate-dependent plasticity as a function of presynaptic firing frequency. However, this integrated rate-temporal learning scheme had not previously been realized on any nanodevice. In this paper, such a scheme is successfully demonstrated on a memristor. Great robustness against spiking-rate fluctuations is achieved by waveform engineering, with the aid of the good analog properties exhibited by the iron-oxide-based memristor. Spike-time-dependent plasticity (STDP) occurs at moderate presynaptic firing frequencies, and spike-rate-dependent plasticity (SRDP) dominates the other regions. This demonstration provides a novel approach to implementing neural coding, which facilitates the development of bio-inspired computing systems.

  14. Reinforcement Probability Modulates Temporal Memory Selection and Integration Processes

    Science.gov (United States)

    Matell, Matthew S.; Kurti, Allison N.

    2013-01-01

    We have previously shown that rats trained in a mixed-interval peak procedure (tone = 4s, light = 12s) respond in a scalar manner at a time in between the trained peak times when presented with the stimulus compound (Swanton & Matell, 2011). In our previous work, the two component cues were reinforced with different probabilities (short = 20%, long = 80%) to equate response rates, and we found that the compound peak time was biased toward the cue with the higher reinforcement probability. Here, we examined the influence that different reinforcement probabilities have on the temporal location and shape of the compound response function. We found that the time of peak responding shifted as a function of the relative reinforcement probability of the component cues, becoming earlier as the relative likelihood of reinforcement associated with the short cue increased. However, as the relative probabilities of the component cues grew dissimilar, the compound peak became non-scalar, suggesting that the temporal control of behavior shifted from a process of integration to one of selection. As our previous work has utilized durations and reinforcement probabilities more discrepant than those used here, these data suggest that the processes underlying the integration/selection decision for time are based on cue value. PMID:23896560

  15. Effects of sensorineural hearing loss on temporal coding of harmonic and inharmonic tone complexes in the auditory nerve.

    Science.gov (United States)

    Kale, Sushrut; Micheyl, Christophe; Heinz, Michael G

    2013-01-01

    Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination, and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones, than normal-hearing (NH) listeners, especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information and could be due to degraded phase locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to band-pass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed from shuffled cross-correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded, because phase locking remained intact. These results suggest that perceptual "TFS-processing" deficits do not simply reflect degraded phase locking at the level of the AN. To the extent that performance in F0- and harmonic/inharmonic discrimination

  16. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an epileptic syndrome characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological, and electroencephalography examinations, and afterwards they were screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in successive generations, which is especially important for genetic counselling. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Independent or integrated processing of interaural time and level differences in human auditory cortex?

    Science.gov (United States)

    Altmann, Christian F; Terada, Satoshi; Kashino, Makio; Goto, Kazuhiro; Mima, Tatsuya; Fukuyama, Hidenao; Furukawa, Shigeto

    2014-06-01

    Sound localization in the horizontal plane is mainly determined by interaural time differences (ITD) and interaural level differences (ILD). Both cues result in an estimate of sound source location, and in many real-life situations these two cues are roughly congruent. When stimulating listeners with headphones, it is possible to counterbalance the two cues, so-called ITD/ILD trading. This phenomenon speaks for integrated ITD/ILD processing at the behavioral level. However, it is unclear at what stages of the auditory processing stream ITD and ILD cues are integrated to provide a unified percept of sound lateralization. We therefore used human electroencephalography to test for integrated versus independent ITD/ILD processing at the level of preattentive cortical processing, by measuring the mismatch negativity (MMN) to changes in sound lateralization. We presented a series of diotic standards (perceived at a midline position) that were interrupted by deviants that entailed either a change in a) ITD only, b) ILD only, c) congruent ITD and ILD, or d) counterbalanced ITD/ILD (ITD/ILD trading). The sound stimuli were either i) pure tones with a frequency of 500 Hz, or ii) amplitude-modulated tones with a carrier frequency of 4000 Hz and a modulation frequency of 125 Hz. We observed significant MMN for the ITD/ILD-traded deviants in the case of the 500 Hz pure tones and the 4000 Hz amplitude-modulated tone. This speaks for independent processing of ITD and ILD at the level of the MMN within auditory cortex. However, the combined ITD/ILD cues elicited a smaller MMN than the sum of the MMNs induced by ITD and ILD cues presented in isolation for 500 Hz, but not for 4000 Hz, suggesting independent processing for the higher frequency only. Thus, the two markers for independent processing, additivity and cue-conflict, led to contradictory conclusions, with a dissociation between the lower (500 Hz) and higher (4000 Hz) frequency bands. Copyright © 2014.

  18. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means of auditory-model simulations of measures of spectral and temporal resolution. Outcomes were simulated using the auditory processing model of Jepsen et al. (2008), with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.
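
    A single channel of dynamic-range compression can be sketched as a static broken-stick gain rule applied to the instantaneous signal level, smoothed with attack and release time constants. The parameter values below are illustrative, not those of any particular hearing aid or of this study:

```python
import numpy as np

def compress_channel(x, fs, threshold_db=50.0, ratio=3.0,
                     attack_ms=5.0, release_ms=50.0):
    """Feed-forward single-channel dynamic-range compressor.
    Levels above threshold_db (dB re an arbitrary reference) are
    reduced according to the compression ratio."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    excess = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -excess * (1.0 - 1.0 / ratio)   # static gain curve
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    smoothed = np.empty_like(gain_db)
    g = 0.0
    for i, target in enumerate(gain_db):
        a = a_att if target < g else a_rel    # fast attack, slow release
        g = a * g + (1.0 - a) * target
        smoothed[i] = g
    return x * 10 ** (smoothed / 20)
```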

  19. Auditory adaptation improves tactile frequency perception

    NARCIS (Netherlands)

    Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.

    2017-01-01

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals

  20. Laevo: A Temporal Desktop Interface for Integrated Knowledge Work

    DEFF Research Database (Denmark)

    Jeuris, Steven; Houben, Steven; Bardram, Jakob

    2014-01-01

    Prior studies show that knowledge work is characterized by highly interlinked practices, including task, file, and window management. However, existing personal information management tools primarily focus on a limited subset of knowledge work, forcing users to perform additional manual configuration work to integrate the different tools they use. In order to understand tool usage, we review literature on how users' activities are created and evolve over time as part of knowledge-worker practices. From this we derive the activity life cycle, a conceptual framework describing the different states and transitions of an activity. The life cycle is used to inform the design of Laevo, a temporal activity-centric desktop interface for personal knowledge work. Laevo allows users to structure work within dedicated workspaces, managed on a timeline. Through a centralized notification system which

  1. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of its high spatial resolution and noninvasiveness. Applications of fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system

  2. Evaluation of temporal bone pneumatization on high resolution CT (HRCT): measurements of the temporal bone in normal and otitis media groups and their correlation to measurements of the internal auditory meatus, vestibular or cochlear aqueduct

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Miyako

    1988-07-01

    High resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These cases consisted of 109 sides with normal pneumatization (NR group) and 73 sides with poor pneumatization resulting from chronic otitis media (OM group). The NR group included sides with sensorineural hearing loss and/or sudden deafness. The three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct, and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter and length of the internal auditory meatus, the vestibular aqueduct, or the cochlear aqueduct were also measured. Measurements of the temporal bone showed statistically significant differences between the NR and OM groups. Correlations of both the diameter and the length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither the vestibular nor the cochlear aqueduct measurements showed any significant correlation with those of the temporal bone.

  3. The specialized structure of human language cortex: pyramidal cell size asymmetries within auditory and language-associated regions of the temporal lobes.

    Science.gov (United States)

    Hutsler, Jeffrey J

    2003-08-01

    Functional lateralization of language within the cerebral cortex has long driven the search for structural asymmetries that might underlie language asymmetries. Most examinations of structural asymmetry have focused upon the gross size and shape of cortical regions in and around language areas. In the last 20 years several labs have begun to document microanatomical asymmetries in the structure of language-associated cortical regions. Such microanatomic results provide useful constraints and clues to our understanding of the biological bases of language specialization in the cortex. In a previous study we documented asymmetries in the size of a specific class of pyramidal cells in the superficial cortical layers. The present work uses a nonspecific stain for cell bodies to demonstrate the presence of an asymmetry in layer III pyramidal cell sizes within auditory, secondary auditory and language-associated regions of the temporal lobes. Specifically, the left hemisphere contains a greater number of the largest pyramidal cells, those that are thought to be the origin of long-range cortico-cortical connections. These results are discussed in the context of cortical columns and how such an asymmetry might alter cortical processing. These findings, in conjunction with other asymmetries in cortical organization that have been documented within several labs, clearly demonstrate that the columnar and connective structure of auditory and language cortex in the left hemisphere is distinct from homotopic regions in the contralateral hemisphere.

  4. The oscillatory activities and its synchronization in auditory-visual integration as revealed by event-related potentials to bimodal stimuli

    Science.gov (United States)

    Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie

    2012-03-01

    The neural mechanism of auditory-visual speech integration is a central topic in multimodal perception research. Articulation conveys speech information that helps detect and disambiguate auditory speech. Oscillations and their synchronization, important characteristics of the EEG, are increasingly applied in cognition research. This study analyzed EEG data acquired with unimodal and bimodal stimuli using time-frequency and phase-synchrony approaches, and investigated the oscillatory activities and synchrony modes underlying the evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. Beta activity and differences in its synchronization were related to the gesture-evoked N1-P2, which arises at an early stage of coding the speech-related articulatory action. Alpha oscillations and their synchronization, related to the auditory N1-P2, might be mainly responsible for auditory speech processing driven by anticipation from gesture to sound features. Changes in the visual gesture enhanced the interaction of auditory brain regions. These results help explain the power and connectivity changes of event-evoked oscillatory activities that accompany ERPs during auditory-visual speech integration.
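
    Phase synchrony of the kind analyzed here is commonly quantified with the phase-locking value (PLV), computed from the instantaneous phases of band-limited signals. A minimal sketch using SciPy; the frequency band and filter order are illustrative, not those of the study:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(8.0, 12.0), order=4):
    """PLV between two equal-length signals within a frequency band:
    1.0 means a perfectly constant phase difference, 0.0 no relation."""
    b, a = butter(order, np.asarray(band) / (fs / 2.0), btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))
```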

  5. Auditory-olfactory integration: congruent or pleasant sounds amplify odor pleasantness

    National Research Council Canada - National Science Library

    Seo, Han-Seok; Hummel, Thomas

    2011-01-01

    … and the "halo/horns effect" of auditory pleasantness (Experiment 2). First, in Experiment 1, participants were presented with congruent, incongruent, or neutral sounds before and during the presentation of odor…

  6. Temporal maps in appetitive Pavlovian conditioning.

    Science.gov (United States)

    Taylor, Kathleen M; Joseph, Victory; Zhao, Alice S; Balsam, Peter D

    2014-01-01

    Previous research suggests animals may integrate temporal information into mental representations, or temporal maps. We examined the parameters under which animals integrate temporal information in three appetitive conditioning experiments. In Experiment 1, the temporal relationship between 2 auditory cues was established during sensory preconditioning (SPC). Subsequently, rats were given first-order conditioning (FOC) with one of the cues. Results showed integration of the order of cues between the SPC and FOC training phases. In subsequent experiments we tested the hypothesis that quantitative temporal information can be integrated across phases. In Experiment 2, SPC of two short auditory cues superimposed on a longer auditory cue was followed by FOC of either one of the short cues, or of the long cue at different times within the cue. Contrary to our predictions, we did not find evidence of integration of temporal information across the phases of the experiment; instead, responding to the SPC cues in Experiment 2 appeared to be dominated by generalization from the FOC cues. In Experiment 3, shorter auditory cues were superimposed on a longer-duration light cue, but with asynchronous onset and offset of the superimposed cues. There is some evidence consistent with the hypothesis that quantitative discrimination of whether reward should be expected during the early or later parts of a cue could be integrated across experiences. However, the pattern of responding within cues was not indicative of integration of quantitative temporal information. Generalization of expected times of reward during FOC seems to be the dominant determinant of within-cue response patterns in these experiments. Consequently, while we clearly demonstrated the integration of temporal order in the modulation of this dominant pattern, we did not find strong evidence of integration of precise quantitative temporal information. This article is part of a Special Issue entitled: Associative and Temporal Learning.

  7. The Relationship between Brainstem Temporal Processing and Performance on Tests of Central Auditory Function in Children with Reading Disorders

    Science.gov (United States)

    Billiet, Cassandra R.; Bellis, Teri James

    2011-01-01

    Purpose: Studies using speech stimuli to elicit electrophysiologic responses have found approximately 30% of children with language-based learning problems demonstrate abnormal brainstem timing. Research is needed regarding how these responses relate to performance on behavioral tests of central auditory function. The purpose of the study was to…

  8. Gone in a Flash: Manipulation of Audiovisual Temporal Integration Using Transcranial Magnetic Stimulation

    Directory of Open Access Journals (Sweden)

    Roy eHamilton

    2013-09-01

    While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke, Vieth, Cottrell, and Mattingley (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events, and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy.

  9. Sensitivity of neurons in the auditory midbrain of the grassfrog to temporal characteristics of sound. II. Stimulation with amplitude modulated sound.

    Science.gov (United States)

    Epping, W J; Eggermont, J J

    1986-01-01

    The coding of the fine temporal structure of sound, especially the frequency of amplitude modulation, was investigated at the single-unit level in the auditory midbrain of the grassfrog. Sinusoidally amplitude-modulated sound bursts and continuous sound with low-pass Gaussian noise amplitude modulation were used as stimuli, with both tonal and wideband noise carriers. The response to sinusoidally amplitude-modulated sound bursts was studied with respect to two types of possible codes: a rate code and a synchrony code. From the iso-intensity rate histograms, five basic average response characteristics as a function of modulation frequency were observed: low-pass, band-pass, high-pass, bimodal and non-selective types. The synchronization capability, expressed as a synchronization index, was non-significant for 38% of the units and a low-pass function of modulation frequency for most of the other units. The stimulus-response relation to noise-amplitude-modulated sound was investigated with a nonlinear systems-theoretical approach. On the basis of first- and second-order Wiener-Volterra kernels, possible neural mechanisms accounting for temporal selectivity were obtained. About one quarter of the units had response characteristics that were invariant to changes in sound pressure level and spectral content of the carrier. These units may function as feature detectors of the fine temporal structure of sound. The spectro-temporal sensitivity range of the auditory midbrain of the grassfrog appeared not to be restricted to, and showed no preference for, the spectro-temporal characteristics of the ensemble of conspecific calls. Comparison of response characteristics to periodic click trains, as studied in the companion paper (Epping and Eggermont, 1986), and sinusoidally amplitude modulated sound bursts revealed that the observed temporal sensitivity is due to a combination of sensitivities to sound periodicity and pulse duration. It was found that for most
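
    The synchronization index reported in this record is conventionally computed as vector strength. A minimal sketch under that assumption, with invented spike times rather than the study's data:

        import numpy as np

        def vector_strength(spike_times, mod_freq):
            """Synchronization index: 1 = perfect phase locking, 0 = none."""
            phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
            return np.abs(np.mean(np.exp(1j * phases)))

        # Toy example: spikes locked to a 20 Hz modulation vs. random spikes.
        rng = np.random.default_rng(1)
        locked = np.arange(100) / 20.0 + rng.normal(0.0, 0.002, 100)
        random = rng.uniform(0.0, 5.0, 100)
        print("locked:", vector_strength(locked, 20.0))   # close to 1
        print("random:", vector_strength(random, 20.0))   # close to 0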

  10. An Individual Differences Approach to Temporal Integration and Order Reversals in the Attentional Blink Task.

    Science.gov (United States)

    Willems, Charlotte; Saija, Jefta D; Akyürek, Elkan G; Martens, Sander

    2016-01-01

    The reduced ability to identify a second target when it is presented in close temporal succession of a first target is called the attentional blink (AB). Studies have shown large individual differences in AB task performance, where lower task performance has been associated with more reversed order reports of both targets if these were presented in direct succession. In order to study the suggestion that reversed order reports reflect loss of temporal information, in the current study, we investigated whether individuals with a larger AB have a higher tendency to temporally integrate both targets into one visual event by using an AB paradigm containing symbol target stimuli. Indeed, we found a positive relation between the tendency to temporally integrate information and individual AB magnitude. In contrast to earlier work, we found no relation between order reversals and individual AB magnitude. The occurrence of temporal integration was negatively related to the number of order reversals, indicating that individuals either integrated or separated and reversed information. We conclude that individuals with better AB task performance use a shorter time window to integrate information, and therefore have higher preservation of temporal information. Furthermore, order reversals observed in paradigms with alphanumeric targets indeed seem to at least partially reflect temporal integration of both targets. Given the negative relation between temporal integration and 'true' order reversals observed with the current symbolic target set, these two behavioral outcomes seem to be two sides of the same coin.

  11. Spectro-temporal interactions in auditory-visual perception: How the eyes modulate what the ears hear

    Science.gov (United States)

    Grant, Ken W.; van Wassenhove, Virginie

    2004-05-01

    Auditory-visual speech perception has been shown repeatedly to be both more accurate and more robust than auditory speech perception. Attempts to explain these phenomena usually treat acoustic and visual speech information (i.e., accessed via speechreading) as though they were derived from independent processes. Recent electrophysiological (EEG) studies, however, suggest that visual speech processes may play a fundamental role in modulating the way we hear. For example, both the timing and amplitude of auditory-specific event-related potentials as recorded by EEG are systematically altered when speech stimuli are presented audiovisually as opposed to auditorily. In addition, the detection of a speech signal in noise is more readily accomplished when accompanied by video images of the speaker's production, suggesting that the influence of vision on audition occurs quite early in the perception process. But the impact of visual cues on what we ultimately hear is not limited to speech. Our perceptions of loudness, timbre, and sound source location can also be influenced by visual cues. Thus, for speech and nonspeech stimuli alike, predicting a listener's response to sound based on acoustic engineering principles alone may be misleading. Examples of acoustic-visual interactions will be presented which highlight the multisensory nature of our hearing experience.

  12. Auditory Hallucinations in Schizophrenia and Nonschizophrenia Populations : A Review and Integrated Model of Cognitive Mechanisms

    NARCIS (Netherlands)

    Waters, Flavie; Allen, Paul; Aleman, Andre; Fernyhough, Charles; Woodward, Todd S.; Badcock, Johanna C.; Barkus, Emma; Johns, Louise; Varese, Filippo; Menon, Mahesh; Vercammen, Ans; Laroi, Frank

    While the majority of cognitive studies on auditory hallucinations (AHs) have been conducted in schizophrenia (SZ), an increasing number of researchers are turning their attention to different clinical and nonclinical populations, often using SZ findings as a model for research. Recent advances

  13. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth eHendrich

    2012-08-01

    Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known at what point during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations influence temporal-order judgment, and thereby to determine the stage of processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  14. Two visual targets for the price of one? : Pupil dilation shows reduced mental effort through temporal integration

    NARCIS (Netherlands)

    Wolff, Michael J; Scholz, Sabine; Akyürek, Elkan G; van Rijn, Hedderik

    In dynamic sensory environments, successive stimuli may be combined perceptually and represented as a single, comprehensive event by means of temporal integration. Such perceptual segmentation across time is intuitively plausible. However, the possible costs and benefits of temporal integration in

  15. Enlarged temporal integration window in schizophrenia indicated by the double-flash illusion.

    Science.gov (United States)

    Haß, Katharina; Sinke, Christopher; Reese, Tanya; Roy, Mandy; Wiswede, Daniel; Dillo, Wolfgang; Oranje, Bob; Szycik, Gregor R

    2017-03-01

    In the present study we were interested in the processing of audio-visual integration in schizophrenia compared to healthy controls. The number of sound-induced double-flash illusions served as an indicator of audio-visual integration. We expected altered integration as well as a different window of temporal integration for patients. Fifteen schizophrenia patients and 15 healthy volunteers matched for age and gender were included in this study. We used stimuli with eight different temporal delays (stimulus onset asynchronies (SOAs) of 25, 50, 75, 100, 125, 150, 200 and 300 ms) to induce a double-flash illusion. Group differences and the widths of the temporal integration windows were calculated from the percentages of reported double-flash illusions. Patients showed significantly more illusions (ca. 36-44% vs. 9-16% in control subjects) for SOAs of 150-300 ms. The temporal integration window for control participants extended from SOAs of 25 to 200 ms, whereas for patients integration was found across all included temporal delays. We found no significant relationship between the number of illusions and either illness severity, chlorpromazine-equivalent doses or duration of illness in patients. Our results are interpreted in favour of an enlarged temporal integration window for audio-visual stimuli in schizophrenia patients, which is consistent with previous research.
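
    A toy sketch of the window-width measure used above: tabulate the illusion rate per SOA and take the largest SOA at which the rate still exceeds a criterion. All numbers and the 10% criterion are invented for illustration; they are not the study's data or its exact analysis.

        import numpy as np

        # Hypothetical illusion rates (% of trials with an illusory second flash).
        soas = np.array([25, 50, 75, 100, 125, 150, 200, 300])        # ms
        controls = np.array([55, 48, 40, 31, 22, 16, 12, 9])          # %
        patients = np.array([58, 54, 50, 47, 45, 44, 40, 36])         # %

        def window_width(soas, rates, criterion=10.0):
            """Largest SOA at which the illusion rate still exceeds a criterion."""
            above = soas[rates > criterion]
            return int(above.max()) if above.size else 0

        print("control window up to", window_width(soas, controls), "ms")  # 200
        print("patient window up to", window_width(soas, patients), "ms")  # 300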

  16. Auditory Display

    DEFF Research Database (Denmark)

    volume. The conference's topics include auditory exploration of data via sonification and audification; real time monitoring of multivariate date; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...... auditory display creation; data handling for auditory display systems; applications of auditory display....

  17. Knockdown of the dyslexia-associated gene Kiaa0319 impairs temporal responses to speech stimuli in rat primary auditory cortex.

    Science.gov (United States)

    Centanni, T M; Booker, A B; Sloan, A M; Chen, F; Maher, B J; Carraway, R S; Khodaparast, N; Rennaker, R; LoTurco, J J; Kilgard, M P

    2014-07-01

    One in 15 school-age children has dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluate the effect in a rat model of phoneme discrimination. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the neural discrimination ability of speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex.

  18. A function for binaural integration in auditory grouping and segregation in the inferior colliculus.

    Science.gov (United States)

    Nakamoto, Kyle T; Shackleton, Trevor M; Magezi, David A; Palmer, Alan R

    2015-03-15

    Responses of neurons to binaural, harmonic complex stimuli in urethane-anesthetized guinea pig inferior colliculus (IC) are reported. To assess the binaural integration of harmonicity cues for sound segregation and grouping, responses were measured to harmonic complexes with different fundamental frequencies presented to each ear. Simultaneously gated harmonic stimuli with fundamental frequencies of 125 Hz and 145 Hz were presented to the left and right ears, respectively, and recordings made from 96 neurons with characteristic frequencies >2 kHz in the central nucleus of the IC. Of these units, 70 responded continuously throughout the stimulus and were excited by the stimulus at the contralateral ear. The stimulus at the ipsilateral ear excited (EE: 14%; 10/70), inhibited (EI: 33%; 23/70), or had no significant effect (EO: 53%; 37/70), defined by the effect on firing rate. The neurons phase locked to the temporal envelope at each ear to varying degrees depending on signal level. Many of the cells (predominantly EO) were dominated by the response to the contralateral stimulus. Another group (predominantly EI) synchronized to the contralateral stimulus and were suppressed by the ipsilateral stimulus in a phasic manner. A third group synchronized to the stimuli at both ears (predominantly EE). Finally, a group only responded when the waveform peaks from each ear coincided. We conclude that these groups of neurons represent different "streams" of information but exhibit modifications of the response rather than encoding a feature of the stimulus, like pitch.

  20. Bilingualism protects anterior temporal lobe integrity in aging.

    Science.gov (United States)

    Abutalebi, Jubin; Canini, Matteo; Della Rosa, Pasquale A; Sheung, Lo Ping; Green, David W; Weekes, Brendan S

    2014-09-01

    Cerebral gray-matter volume (GMV) decreases in normal aging, but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor, and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases: the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole-brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in the left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region-of-interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral, however, because the effect of naming performance on GMV in the right temporal pole was not significantly different. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor against GMV decreases in healthy aging.

  1. Eye Can Hear Clearly Now: Inverse Effectiveness in Natural Audiovisual Speech Processing Relies on Long-Term Crossmodal Temporal Integration.

    Science.gov (United States)

    Crosse, Michael J; Di Liberto, Giovanni M; Lalor, Edmund C

    2016-09-21

    Speech comprehension is improved by viewing a speaker's face, especially in adverse hearing conditions, a principle known as inverse effectiveness. However, the neural mechanisms that help to optimize how we integrate auditory and visual speech in such suboptimal conversational environments are not yet fully understood. Using human EEG recordings, we examined how visual speech enhances the cortical representation of auditory speech at a signal-to-noise ratio that maximized the perceptual benefit conferred by multisensory processing relative to unisensory processing. We found that the influence of visual input on the neural tracking of the audio speech signal was significantly greater in noisy than in quiet listening conditions, consistent with the principle of inverse effectiveness. Although envelope tracking during audio-only speech was greatly reduced by background noise at an early processing stage, it was markedly restored by the addition of visual speech input. In background noise, multisensory integration occurred at much lower frequencies and was shown to predict the multisensory gain in behavioral performance at a time lag of ∼250 ms. Critically, we demonstrated that inverse effectiveness, in the context of natural audiovisual (AV) speech processing, relies on crossmodal integration over long temporal windows. Our findings suggest that disparate integration mechanisms contribute to the efficient processing of AV speech in background noise. The behavioral benefit of seeing a speaker's face during conversation is especially pronounced in challenging listening environments. However, the neural mechanisms underlying this phenomenon, known as inverse effectiveness, have not yet been established. Here, we examine this in the human brain using natural speech-in-noise stimuli that were designed specifically to maximize the behavioral benefit of audiovisual (AV) speech. We find that this benefit arises from our ability to integrate multimodal information over
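
    The neural envelope tracking described in this record can be approximated, very roughly, by correlating the speech envelope with the EEG across a range of time lags; the study itself uses more sophisticated regression-based measures, and the data below are synthetic. The ~250 ms lag in the toy example mirrors the lag highlighted in the abstract.

        import numpy as np

        def envelope_tracking(envelope, eeg, fs, max_lag_ms=400):
            """Correlation between a speech envelope and EEG at each time lag;
            a crude stand-in for regression-based tracking measures."""
            lags = np.arange(int(max_lag_ms / 1000 * fs))
            r = [np.corrcoef(envelope[:-lag or None], eeg[lag:])[0, 1] for lag in lags]
            return lags / fs * 1000.0, np.array(r)

        # Synthetic data: EEG that follows the envelope at a 250 ms delay.
        fs = 128
        rng = np.random.default_rng(2)
        env = rng.standard_normal(60 * fs)
        eeg = np.roll(env, int(0.25 * fs)) + 2.0 * rng.standard_normal(env.size)
        lags_ms, r = envelope_tracking(env, eeg, fs)
        print("best lag:", lags_ms[np.argmax(r)], "ms")   # ~250 ms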

  2. Temporal integration of loudness measured using categorical loudness scaling and matching procedures

    OpenAIRE

    Valente, Daniel L.; Joshi, Suyash N.; Jesteadt, Walt

    2011-01-01

    Temporal integration of loudness of 1 kHz tones with 5 and 200 ms durations was assessed in four subjects using two loudness measurement procedures: categorical loudness scaling (CLS) and loudness matching. CLS provides a reliable and efficient procedure for collecting data on the temporal integration of loudness, and the nonmonotonic behavior previously reported at moderate sound pressure levels is replicated with this procedure. Stimuli that are assigned to the same category are effect...

  3. fMRI of the auditory system: understanding the neural basis of auditory gestalt.

    Science.gov (United States)

    Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich

    2003-12-01

    Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system has progressed at a considerably slower pace compared to other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between the fMRI technique and audition. A well-known difficulty arises from the high-intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli, both from a perceptual point of view and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by auditory research. These novel methodological approaches can make the fMRI exploration of auditory processing much easier and more reliable, and thus may permit filling the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated into the auditory gestalt is beginning to be understood.

  4. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normalhearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM...

  5. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten

    2007-01-01

    to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies BMFs of the IC bandpass units. Instead, different BMFs are yielded from varying the number of input units projecting onto different bandpass units...

  6. Comparison of bandwidths in the inferior colliculus and the auditory nerve. II: Measurement using a temporally manipulated stimulus

    NARCIS (Netherlands)

    M. Mc Laughlin (Myles); J.N. Chabwine; M. van der Heijden (Marcel); P.X. Joris (Philip)

    2008-01-01

    To localize low-frequency sounds, humans rely on an interaural comparison of the temporally encoded sound waveform after peripheral filtering. This process can be compared with cross-correlation. For a broadband stimulus, after filtering, the correlation function has a damped oscillatory

  7. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  8. Analog very large-scale integrated (VLSI) implementation of a model of amplitude-modulation sensitivity in the auditory brainstem.

    Science.gov (United States)

    van Schaik, A; Meddis, R

    1999-02-01

    An analog very large-scale integrated (VLSI) implementation of a model of signal processing in the auditory brainstem is presented and evaluated. The implementation is based on a model of amplitude-modulation sensitivity in the central nucleus of the inferior colliculus (CNIC) previously described by Hewitt and Meddis [J. Acoust. Soc. Am. 95, 2145-2159 (1994)]. A single chip is used to implement the three processing stages of the model: the inner-hair-cell (IHC), cochlear nucleus sustained-chopper, and CNIC coincidence-detection stages. The chip incorporates two new circuits: an IHC circuit and a neuron circuit. The input to the chip is taken from a "silicon cochlea" consisting of a cascade of filters that simulate basilar membrane mechanical frequency selectivity. The chip, which contains 142 neurons, was evaluated using amplitude-modulated pure tones. Individual cells in the CNIC stage demonstrate bandpass rate-modulation responses using these stimuli. The frequency of modulation is represented spatially in an array of these cells as the location of the cell generating the highest rate of action potentials. The chip processes acoustic signals in real time and demonstrates the feasibility of using analog VLSI to build and test auditory models that use large numbers of component neurons.
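
    For orientation, the sketch below simulates only the rectify-integrate-fire front end common to models of this family (hair-cell rectification feeding a leaky integrate-and-fire unit). It is not a reimplementation of the Hewitt and Meddis circuit; in particular, the bandpass modulation tuning of the CNIC stage arises from the chopper and coincidence stages, which are not reproduced here. All parameter values are invented.

        import numpy as np

        def lif_spike_count(drive, fs, tau=0.005, thresh=1.0):
            """Spike count of a leaky integrate-and-fire unit for an input drive."""
            v, dt, n = 0.0, 1.0 / fs, 0
            for s in drive:
                v += dt * (-v / tau + s)
                if v >= thresh:        # fire and reset
                    n += 1
                    v = 0.0
            return n

        fs = 20000
        t = np.arange(0, 0.5, 1 / fs)
        for fm in (10, 40, 160, 640):                    # modulation frequencies, Hz
            am = (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * 1000 * t)
            drive = np.maximum(am, 0.0) * 400.0          # crude hair-cell rectification
            print(fm, "Hz ->", lif_spike_count(drive, fs), "spikes")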

  9. Laminar differences in response to simple and spectro-temporally complex sounds in the primary auditory cortex of ketamine-anesthetized gerbils.

    Directory of Open Access Journals (Sweden)

    Markus K Schaefer

    In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication-call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differ in their spectro-temporal properties. To accomplish this aim, we analyzed multi-unit activity (MUA), local field potentials (LFP), and current source density (CSD) waveforms elicited by simple pure tones and complex communication calls at the single-layer and columnar level in the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of "call-specificity" in the evoked activity. The results showed that whole laminar profiles segregated 1.8-2.6 times better across calls than single-layer activity. Also, laminar LFP and CSD profiles segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were more pronounced at mid and late latencies in the granular and infragranular layers, and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.

  10. A hierarchical nest survival model integrating incomplete temporally varying covariates

    Science.gov (United States)

    Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
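
    As a much-simplified sketch of the daily nest survival models this record builds on, the code below evaluates the log-likelihood of nest histories under a logistic daily-survival model with one temporally varying covariate. It omits the paper's hierarchical treatment of incompletely observed covariates, and all data and coefficients are invented.

        import numpy as np

        def nest_loglik(beta0, beta1, insects, nests):
            """Log-likelihood of daily nest survival with a logistic link:
            s_d = 1 / (1 + exp(-(beta0 + beta1 * insects[d])))."""
            s = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * np.asarray(insects))))
            ll = 0.0
            for days_survived, failed, start in nests:
                ll += np.sum(np.log(s[start:start + days_survived]))
                if failed:                         # failure on the following day
                    ll += np.log(1.0 - s[start + days_survived])
            return ll

        # Invented daily insect counts and three nest histories
        # (days survived, failed flag, index of first observed day).
        insects = np.array([0.1, 0.2, 1.5, 2.0, 2.2, 0.4, 0.3, 0.2, 0.1, 0.0])
        nests = [(5, True, 0), (9, False, 0), (4, True, 3)]
        print(nest_loglik(2.5, -0.8, insects, nests))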

  11. Segregated in perception, integrated for action: immunity of rhythmic sensorimotor coordination to auditory stream segregation.

    Science.gov (United States)

    Repp, Bruno H

    2009-03-01

    Auditory stream segregation can occur when tones of different pitch (A, B) are repeated cyclically: The larger the pitch separation and the faster the tempo, the more likely perception of two separate streams is to occur. The present study assessed stream segregation in perceptual and sensorimotor tasks, using identical ABBABB ... sequences. The perceptual task required detection of single phase-shifted A tones; this was expected to be facilitated by the presence of B tones unless segregation occurred. The sensorimotor task required tapping in synchrony with the A tones; here the phase correction response (PCR) to shifted A tones was expected to be inhibited by B tones unless segregation occurred. Two sequence tempi and three pitch separations (2, 10, and 48 semitones) were used with musically trained participants. Facilitation of perception occurred only at the smallest pitch separation, whereas the PCR was reduced equally at all separations. These results indicate that auditory action control is immune to perceptual stream segregation, at least in musicians. This may help musicians coordinate with diverse instruments in ensemble playing.

  12. Maturation of cortical auditory evoked potentials (CAEPs) to speech recorded from frontocentral and temporal sites: three months to eight years of age.

    Science.gov (United States)

    Shafer, Valerie L; Yu, Yan H; Wagner, Monica

    2015-02-01

    The goal of the current analysis was to examine the maturation of cortical auditory evoked potentials (CAEPs) from three months of age to eight years of age. The superior frontal positive-negative-positive sequence (P1, N2, P2) and the temporal site, negative-positive-negative sequence (possibly, Na, Ta, Tb of the T-complex) were examined. Event-related potentials were recorded from 63 scalp sites to a 250-ms vowel. Amplitude and latency of peaks were measured at left and right frontal sites (near Fz) and at left and right temporal sites (T7 and T8). In addition, the largest peak (typically corresponding to P1) was selected from global field power (GFP). The results revealed a large positive peak (P1) easily identified at frontal sites across all ages. The N2 emerged after 6 months of age and the following P2 between 8 and 30 months of age. The latencies of these peaks decreased exponentially with the most rapid decrease observed for P1. For amplitude, only P1 showed a clear relationship with age, becoming more positive in a somewhat linear fashion. At the temporal sites only a negative peak, which might be Na, was clearly observed at both left and right sites in children older than 14 months and peaking between 100 and 200 ms. P1 measures at frontal sites and Na peak latencies were moderately correlated. The temporal negative peak latency showed a different maturational timecourse (linear in nature) than the P1 peak, suggesting at least partial independence. Distinct Ta (positive) and Tb (negative) peaks, following Na and peaking between 120 and 220 ms were not consistently found in most age groups of children, except Ta which was present in 7 year olds. Future research, which includes manipulation of stimulus factors, and use of modeling techniques will be needed to explain the apparent, protracted maturation of the temporal site measures in the current study.

  13. Distortions of temporal integration and perceived order caused by the interplay between stimulus contrast and duration

    NARCIS (Netherlands)

    Akyürek, Elkan G.; de Jong, Ritske

    2017-01-01

    Stimulus contrast and duration effects on visual temporal integration and order judgment were examined in a unified paradigm. Stimulus onset asynchrony was governed by the duration of the first stimulus in Experiment 1, and by the interstimulus interval in Experiment 2. In Experiment 1, integration

  14. The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience

    Directory of Open Access Journals (Sweden)

    Maria Kon

    2015-05-01

    Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities. I show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition. Here I discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the mere succession of experiences. I question the presupposition that there is such an evident, clear distinction and suggest that, instead, how the distinction is drawn is context-dependent. After suggesting a way to modify the philosophical models of temporal experience to accommodate this context-dependency, I illustrate that these models can fruitfully incorporate features of research projects in embodied musical cognition. To do so I supplement a modified retentionalist model with aspects of recent work that links bodily movement with musical perception (Godøy, 2006, 2010a; Jensenius, Wanderley, Godøy, and Leman, 2010). The resulting model is shown to facilitate novel hypotheses, refine the notion of context-dependency and point towards means of extending the philosophical model and an existent research program.

  15. An exploratory study of temporal integration in the peripheral retina of myopes

    Science.gov (United States)

    Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.

    2017-08-01

    The visual system takes time to respond to visual stimuli: neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.

  16. Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation

    Science.gov (United States)

    Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr

    2017-12-01

    Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. The particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup is of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing repetitive stimulation in three different modalities (visual, tactile and auditory) with precise control of stimulus parameters. Results from a pilot study with stimulation in a single modality and in two modalities simultaneously prove the feasibility of the device for studying the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for precise generation of repetitive stimuli. Besides sequential stimulation in a single modality, parallel stimulation in up to three different modalities can be delivered. The stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. On the
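
    A common first-pass SSEP analysis, and a rough stand-in for the spectral evaluation mentioned above, is the spectral signal-to-noise ratio at the stimulation frequency: power in the stimulus bin divided by the mean power of neighboring bins. The sketch below uses synthetic data; the 7 Hz rate and all other values are invented.

        import numpy as np

        def ssep_snr(eeg, fs, f_stim, n_neighbors=10):
            """Power at the stimulation frequency over mean neighboring-bin power."""
            spec = np.abs(np.fft.rfft(eeg)) ** 2
            freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
            k = np.argmin(np.abs(freqs - f_stim))
            neighbors = np.r_[spec[k - n_neighbors:k], spec[k + 1:k + 1 + n_neighbors]]
            return spec[k] / neighbors.mean()

        # Toy example: a 7 Hz steady-state response buried in noise.
        fs, dur = 256, 30
        t = np.arange(0, dur, 1 / fs)
        rng = np.random.default_rng(5)
        eeg = 0.3 * np.sin(2 * np.pi * 7 * t) + rng.standard_normal(t.size)
        print("SNR at 7 Hz:", ssep_snr(eeg, fs, 7.0))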

  17. Comparison for younger and older adults: Stimulus temporal asynchrony modulates audiovisual integration.

    Science.gov (United States)

    Ren, Yanna; Ren, Yanling; Yang, Weiping; Tang, Xiaoyu; Wu, Fengxia; Wu, Qiong; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong

    2018-02-01

    Recent research has shown that the magnitude of responses to multisensory information is highly dependent on the stimulus structure. The temporal proximity of multiple signal inputs is a critical determinant of cross-modal integration. Here, we investigated the influence that temporal asynchrony has on audiovisual integration in both younger and older adults using event-related potentials (ERPs). Our results showed that in the simultaneous audiovisual condition, except for the earliest integration (80-110 ms), which occurred in the occipital region for older adults but was absent for younger adults, early integration was similar for the younger and older groups. Additionally, late integration was delayed in older adults (280-300 ms) compared to younger adults (210-240 ms). In audition-leading vision conditions, the earliest integration (80-110 ms) was absent in younger adults but did occur in older adults. Additionally, after increasing the temporal disparity from 50 ms to 100 ms, late integration was delayed in both younger (from 230-290 ms to 280-300 ms) and older (from 210-240 ms to 280-300 ms) adults. In the audition-lagging vision conditions, integration occurred only in the A100V condition for younger adults and in the A50V condition for older adults. The current results suggest that the audiovisual temporal integration pattern differs between the audition-leading and audition-lagging vision conditions and further reveal the varying effect of temporal asynchrony on audiovisual integration in younger and older adults.
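
    ERP studies of audiovisual integration commonly test where the AV response departs from the sum of the unisensory responses (AV vs. A + V); a minimal sketch of that additive criterion follows, with synthetic data and an uncorrected point-wise test. This is an illustration of the general approach, not necessarily the exact analysis used in this study.

        import numpy as np
        from scipy import stats

        def integration_windows(av, a, v, times, alpha=0.001):
            """Point-wise paired t-test of AV against A + V across subjects.
            Arrays are (n_subjects, n_samples); returns times where the
            additive model is violated (uncorrected threshold)."""
            t_stat, p = stats.ttest_rel(av, a + v, axis=0)
            return times[p < alpha]

        # Synthetic ERPs: 20 subjects, 600 ms at 1 kHz, integration at 280-300 ms.
        rng = np.random.default_rng(3)
        times = np.arange(600)                      # ms
        a = rng.standard_normal((20, 600))
        v = rng.standard_normal((20, 600))
        av = a + v + rng.standard_normal((20, 600))
        av[:, 280:300] += 1.5                       # super-additive response
        print(integration_windows(av, a, v, times))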

  18. Temporal Ventriloquism in Sensorimotor Synchronization

    Science.gov (United States)

    Parker, Melody Kay

    Perception of time is multisensory and therefore requires integration of the auditory and visual systems. Temporal ventriloquism is a phenomenon in which discrepant temporal aspects of multisensory stimuli are resolved through auditory dominance. Numerous prior experiments have demonstrated temporal ventriloquism using simple flash and click stimuli. The experiment presented herein employed a sensorimotor synchronization task to examine the effect of visual stimulus type across a range of stimulus onset asynchronies (SOA). This study compared sensorimotor response to three visual stimuli: a flash, a baton swinging, and a mallet striking a block. The results of the experiment indicated that the influence of SOA was greatly dependent on stimulus type. In contrast with the transient flash stimulus, the oscillatory visual stimuli provided more spatiotemporal information. This could explain the significantly reduced effect of temporal ventriloquism observed in response to the baton and mallet relative to the flash. Multisensory integration did not absolutely bias the auditory system; predictive visual dynamics proved useful in the unified perception of temporal occurrence.

  19. Screening Test for Auditory Processing (STAP): a preliminary report.

    Science.gov (United States)

    Yathiraj, Asha; Maggu, Akshay Raj

    2013-10-01

    The presence of auditory processing disorder in school-age children has been documented (Katz and Wilde, 1985; Chermak and Musiek, 1997; Jerger and Musiek, 2000; Muthuselvi and Yathiraj, 2009). In order to identify these children early, there is a need for a screening test that is not very time-consuming. The present study aimed to evaluate the independence of the four subsections of the Screening Test for Auditory Processing (STAP) developed by Yathiraj and Maggu (2012). The test was designed to address auditory separation/closure, binaural integration, temporal resolution, and auditory memory in school-age children. The study also aimed to examine the number of children who are at risk for deficits in different auditory processes. A factor-analysis research design was used in the current study. Four hundred school-age children consisting of 218 males and 182 females were randomly selected from 2400 children attending three schools. The children, aged 8 to 13 yr, were in grade three to eight class placements. DATA COLLECTION AND ANALYSES: The children were evaluated on the four subsections of the STAP (speech perception in noise, dichotic consonant-vowel [CV], gap detection, and auditory memory) in a quiet room within their school. The responses were analyzed using principal component analysis (PCA) and confirmatory factor analysis (CFA). In addition, the data were analyzed to determine the number of children who were at risk for an auditory processing disorder (APD). Based on the PCA, three components with eigenvalues greater than 1 were extracted. The orthogonal rotation of the variables using the Varimax technique revealed that component 1 consisted of binaural integration, component 2 consisted of temporal resolution, and component 3 was shared by auditory separation/closure and auditory memory. These findings were confirmed using CFA, where the predicted model displayed a good fit with or without the inclusion of the auditory memory subsection. It was determined that 16
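
    A minimal sketch of the factor-extraction-with-varimax step described above, using made-up subtest scores constructed so that two subtests share a latent factor, loosely mirroring the reported three-component structure. The data and loadings are illustrative only; this is not the study's analysis pipeline.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Invented scores for the four STAP subsections (400 children), built so
        # that speech-in-noise and auditory memory share one latent factor.
        rng = np.random.default_rng(4)
        g1, g2, g3 = (rng.standard_normal(400) for _ in range(3))
        X = np.column_stack([
            g1 + 0.3 * rng.standard_normal(400),   # speech perception in noise
            g2 + 0.3 * rng.standard_normal(400),   # dichotic CV
            g3 + 0.3 * rng.standard_normal(400),   # gap detection
            g1 + 0.3 * rng.standard_normal(400),   # auditory memory
        ])
        fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
        print(np.round(fa.components_.T, 2))       # loadings: rows = subtests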

  20. Temporal profiles of response enhancement in multisensory integration

    Directory of Open Access Journals (Sweden)

    Benjamin A Rowland

    2008-12-01

    Animals have evolved multiple senses that transduce different forms of energy as a way of increasing their sensitivity to environmental events. Each sense provides a unique and independent perspective on the world, and very often a single event stimulates several of them. In order to make best use of the available information, the brain has also evolved the capacity to integrate information across the senses ("multisensory integration"). This facilitates the detection, localization, and identification of a given event, and has obvious survival value for the individual and the species. Multisensory responses in the superior colliculus (SC) evidence shorter latencies and are more robust at their onset. This is the phenomenon of initial response enhancement in multisensory integration, which is believed to reflect a real-time fusion of information across the senses. The present paper reviews two recent reports describing how the timing and robustness of sensory responses change as a consequence of multisensory integration in the model system of the SC.

  1. Perception of global gestalt by temporal integration in simultanagnosia.

    Science.gov (United States)

    Huberle, Elisabeth; Rupek, Paul; Lappe, Markus; Karnath, Hans-Otto

    2009-01-01

    Patients with bilateral parieto-occipital brain damage may show intact processing of individual objects, while their perception of multiple objects is disturbed at the same time. The deficit is termed 'simultanagnosia' and has been discussed in the context of restricted visual working memory and impaired visuo-spatial attention. Recent observations indicated that the recognition of global shapes can be modulated by the spatial distance between individual objects in patients with simultanagnosia and thus is not an all-or-nothing phenomenon depending on spatial continuity. However, grouping mechanisms not only require the spatial integration of visual information, but also involve integration processes over time. The present study investigated motion-defined integration mechanisms in two patients with simultanagnosia. We applied hierarchically organized stimuli of global objects that consisted of coherently moving dots ('shape-from-motion'). In addition, we tested the patients' ability to recognize biological motion by presenting characteristic human movements ('point-light-walker'). The data revealed largely preserved perception of biological motion, while the perception of motion-defined shapes was impaired. Our findings suggest separate mechanisms underlying the recognition of biological motion and shapes defined by coherently moving dots. They thus argue against a restriction in the overall capacity of visual working memory over time as a general explanation for the impaired global shape recognition in patients with simultanagnosia.

  2. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    Science.gov (United States)

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-01-01

    Recent findings have shown that sounds improve visual detection in low-vision individuals when the pairs of audiovisual stimuli are presented simultaneously and from the same spatial position. The present study aimed to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs 0, ±250, ±400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low-vision individuals is highly modulated by top-down mechanisms.

  3. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  4. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how does the network across the whole brain participates during multisensory perception processing remains an open question. We posit that a large-scale functional connectivity among the neural population situated in distributed brain sites may provide valuable insights involved in processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs was computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our
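
    The record above summarizes connectivity via a time-frequency global coherence measure. As a rough, simplified stand-in, the sketch below averages magnitude-squared coherence over all sensor pairs in a frequency band; the data are synthetic and the function is not the study's exact estimator.

        import numpy as np
        from itertools import combinations
        from scipy.signal import coherence

        def global_coherence(eeg, fs, band):
            """Mean magnitude-squared coherence over all sensor pairs in a band;
            `eeg` is an (n_channels, n_samples) array."""
            lo, hi = band
            vals = []
            for i, j in combinations(range(eeg.shape[0]), 2):
                f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs)
                vals.append(cxy[(f >= lo) & (f <= hi)].mean())
            return float(np.mean(vals))

        # Toy example: 8 channels sharing a gamma-band (40 Hz) component.
        fs = 500
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(6)
        eeg = np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, (8, t.size))
        print("gamma (30-60 Hz) global coherence:", global_coherence(eeg, fs, (30, 60)))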

  6. White Matter Integrity Dissociates Verbal Memory and Auditory Attention Span in Emerging Adults with Congenital Heart Disease.

    Science.gov (United States)

    Brewster, Ryan C; King, Tricia Z; Burns, Thomas G; Drossner, David M; Mahle, William T

    2015-01-01

    White matter disruptions have been identified in individuals with congenital heart disease (CHD). However, no specific theory-driven relationships between microstructural white matter disruptions and cognition have been established in CHD. We conducted a two-part study. First, we identified significant differences in fractional anisotropy (FA) of emerging adults with CHD using Tract-Based Spatial Statistics (TBSS). TBSS analyses between 22 participants with CHD and 18 demographically similar controls identified five regions of normal-appearing white matter with significantly lower FA in CHD, and two with higher FA. Next, two regions of lower FA in CHD were selected to examine theory-driven differential relationships with cognition: voxels along the left uncinate fasciculus (UF; a tract theorized to contribute to verbal memory) and voxels along the right middle cerebellar peduncle (MCP; a tract previously linked to attention). In CHD, a significant positive correlation between UF FA and memory was found, r(20)=.42, p=.049 (uncorrected). There was no correlation between UF and auditory attention span. A positive correlation between MCP FA and auditory attention span was found, r(20)=.47, p=.027 (uncorrected). There was no correlation between MCP and memory. In controls, no significant relationships were identified. These results are consistent with previous literature demonstrating lower FA in younger CHD samples, and provide novel evidence for disrupted white matter integrity in emerging adults with CHD. Furthermore, a correlational double dissociation established distinct white matter circuitry (UF and MCP) and differential cognitive correlates (memory and attention span, respectively) in young adults with CHD.

  7. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    Directory of Open Access Journals (Sweden)

    Jordi Navarra

    Full Text Available The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).
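    Temporal realignment in such studies is typically quantified as a shift of the point of subjective simultaneity (PSS) between baseline and post-exposure sessions. A hedged sketch of that analysis step, fitting a Gaussian to the proportion of "simultaneous" responses as a function of AV lag; the toy data, model form, and names are illustrative assumptions, not the authors' data or code.

        import numpy as np
        from scipy.optimize import curve_fit

        def sj_curve(soa, pss, sigma, amp):
            """Gaussian model of P('simultaneous') as a function of SOA (ms)."""
            return amp * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

        # Positive SOA = vision leading (illustrative convention).
        soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
        p_baseline = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.50, 0.20, 0.05])
        p_adapted = np.array([0.05, 0.10, 0.30, 0.65, 0.90, 0.95, 0.70, 0.35, 0.10])

        popt_base, _ = curve_fit(sj_curve, soas, p_baseline, p0=[0.0, 150.0, 1.0])
        popt_adapt, _ = curve_fit(sj_curve, soas, p_adapted, p0=[50.0, 150.0, 1.0])
        print(f"PSS shift after exposure: {popt_adapt[0] - popt_base[0]:+.0f} ms")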

  8. Integrating Anxiety Reduction into an Existing Self-Management of Auditory Hallucinations Course.

    Science.gov (United States)

    Buccheri, Robin K; Trygstad, Louise Nigh; Buffum, Martha D; Ju, Dau-Shen; Dowling, Glenna A

    2017-05-01

    High levels of anxiety were found to interfere with voice hearers' ability to benefit from a 10-Session Behavioral Management of Auditory Hallucinations Course. The 10-session course was revised, adding anxiety reduction strategies to the first four classes and reinforcing those strategies in the remaining eight classes. A multi-site study (N = 27) used repeated measures to determine whether the new 12-session course would significantly reduce anxiety. Ten course leaders were trained and taught the course six times at three different outpatient mental health sites. Three measures of anxiety were used. The 12-session course was found to significantly reduce anxiety after the first four classes with further reduction at the end of the course. Eighty-eight percent of course participants reported the course was moderately to extremely helpful. They also reported that being in a group with others with similar symptoms was valuable. Course leaders reported learning about the prevalence and importance of treating voice hearers' anxiety. [Journal of Psychosocial Nursing and Mental Health Services, 55(5), 29-39.]. Copyright 2017, SLACK Incorporated.

  9. Long-term recovery from hippocampal-related behavioral and biochemical abnormalities induced by noise exposure during brain development. Evaluation of auditory pathway integrity.

    Science.gov (United States)

    Uran, S L; Gómez-Casati, M E; Guelman, L R

    2014-10-01

    Sound is an important part of man's contact with the environment and has served as a critical means for survival throughout his evolution. As a result of exposure to noise, physiological functions such as those involving structures of the auditory and non-auditory systems might be damaged. We have previously reported that noise-exposed developing rats elicited hippocampal-related histological, biochemical and behavioral changes. However, no data about the time course of these changes were reported. Moreover, measurements of auditory pathway function were not performed in exposed animals. Therefore, with the present work, we aim to test the onset and the persistence of the different extra-auditory abnormalities observed in noise-exposed rats and to evaluate auditory pathway integrity. Fifteen-day-old male Wistar rats were exposed to moderate noise levels (95-97 dB SPL, 2 h a day) for one day (acute noise exposure, ANE) or for 15 days (sub-acute noise exposure, SANE). Hippocampal biochemical determinations as well as short-term (ST) and long-term (LT) behavioral assessments were performed. In addition, histological and functional evaluations of the auditory pathway were carried out in exposed animals. Our results show that hippocampal-related behavioral and biochemical changes (impairments in habituation, recognition and associative memories as well as distortion of anxiety-related behavior, decreases in reactive oxygen species (ROS) levels and increases in antioxidant enzyme activities) induced by noise exposure were almost completely restored by PND 90. In addition, auditory evaluation shows that the increased cochlear thresholds observed in exposed rats were re-established at PND 90, although with a remarkable supra-threshold amplitude reduction. These data suggest that noise-induced hippocampal and auditory-related alterations are mostly transient and that the effects of noise on the hippocampus might be, at least in part, mediated by damage to the auditory pathway.

  10. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
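    A minimal sketch of the two processing stages this record describes, under simplifying assumptions: each local membrane resonance is approximated by a second-order bandpass filter, and the inner-hair-cell (IHC) stage by half-wave rectification followed by low-pass filtering. All parameter values and names are illustrative, not the device's specifications.

        import numpy as np
        from scipy.signal import butter, lfilter

        def resonator(signal, fs, f0, q=10.0):
            """Second-order bandpass filter standing in for one membrane resonance."""
            b, a = butter(2, [f0 * (1 - 1 / (2 * q)), f0 * (1 + 1 / (2 * q))],
                          btype="band", fs=fs)
            return lfilter(b, a, signal)

        def ihc_stage(vibration, fs, cutoff=1000.0):
            """Half-wave rectification plus low-pass: a minimal IHC transduction model."""
            rectified = np.maximum(vibration, 0.0)
            b, a = butter(1, cutoff, fs=fs)
            return lfilter(b, a, rectified)

        fs = 44100
        t = np.arange(0, 0.05, 1 / fs)                    # 50-ms tone burst
        tone = np.sin(2 * np.pi * 2000 * t)               # 2-kHz pure tone
        channels = [ihc_stage(resonator(tone, fs, f0), fs)
                    for f0 in (1000, 2000, 4000)]         # three resonance sites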

  11. Interregional alpha-band synchrony supports temporal cross-modal integration

    NARCIS (Netherlands)

    van Driel, J.; Knapen, T.H.J.; van Es, D.M.; Cohen, M.X.

    2014-01-01

    In a continuously changing environment, time is a key property that tells us whether information from the different senses belongs together. Yet, little is known about how the brain integrates temporal information across sensory modalities. Using high-density EEG combined with a novel psychometric

  12. What you see is what you remember : Visual chunking by temporal integration enhances working memory

    NARCIS (Netherlands)

    Akyürek, Elkan G.; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-01-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the

  13. Stochastic undersampling steepens auditory threshold/duration functions: Implications for understanding auditory deafferentation and aging

    Directory of Open Access Journals (Sweden)

    Frederic eMarmel

    2015-05-01

    Full Text Available It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits not accountable for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies that show that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to various different degrees. Stimuli with and without undersampling were equated for overall energy in order to focus on the changes that undersampling elicited on the stimulus waveforms, and not on its effects on the overall stimulus energy. Stochastic undersampling impaired the detection of short sounds (<50 ms), whereas the detection of longer sounds (>50 ms) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near-)normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be 'slower than normal', as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support the probabilistic theories of detectability that challenge the idea that auditory threshold occurs by integration of sound energy over time.
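    A minimal sketch of stochastic undersampling with energy equalization as the abstract describes it: waveform samples are randomly discarded and the result is rescaled so that overall energy matches the intact stimulus. The survival probability and stimulus parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def stochastic_undersample(x, survival_prob):
            """Keep each sample with probability `survival_prob`, zeroing the rest,
            then rescale so the output has the same energy as the input."""
            mask = rng.random(x.size) < survival_prob
            y = x * mask
            energy_x = np.sum(x ** 2)
            energy_y = np.sum(y ** 2)
            return y * np.sqrt(energy_x / max(energy_y, 1e-12))

        fs = 44100
        noise = rng.standard_normal(int(0.01 * fs))    # 10-ms broadband noise burst
        degraded = stochastic_undersample(noise, survival_prob=0.3)
        assert np.isclose(np.sum(degraded ** 2), np.sum(noise ** 2))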

  14. FACILITATING INTEGRATED SPATIO-TEMPORAL VISUALIZATION AND ANALYSIS OF HETEROGENEOUS ARCHAEOLOGICAL AND PALAEOENVIRONMENTAL RESEARCH DATA

    Directory of Open Access Journals (Sweden)

    C. Willmes

    2012-07-01

    Full Text Available In the context of the Collaborative Research Centre 806 "Our way to Europe" (CRC806), a research database is developed for integrating data from the disciplines of archaeology, the geosciences and the cultural sciences to facilitate integrated access to heterogeneous data sources. A practice-oriented data integration concept and its implementation is presented in this contribution. The data integration approach is based on the application of Semantic Web Technology and is applied to the domains of archaeological and palaeoenvironmental data. The aim is to provide integrated spatio-temporal access to an existing wealth of data to facilitate research on the integrated data basis. For the web portal of the CRC806 research database (CRC806-Database), a number of interfaces and applications have been evaluated, developed and implemented for exposing the data to interactive analysis and visualizations.

  15. What You See Is What You Remember: Visual Chunking by Temporal Integration Enhances Working Memory.

    Science.gov (United States)

    Akyürek, Elkan G; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-12-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy, and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the attentional and working memory costs of temporal integration of successive target stimulus pairs embedded in rapid serial visual presentation. ERPs were measured as a function of behavioral reports: One target, two separate targets, or two targets reported as a single integrated target. N2pc amplitude, reflecting attentional processing, depended on the actual number of successive targets. The memory-related CDA and P3 components instead depended on the perceived number of targets irrespective of their actual succession. The report of two separate targets was associated with elevated amplitude, whereas integrated as well as actual single targets exhibited lower amplitude. Temporal integration thus provided an efficient means of processing sensory input, offloading working memory so that the features of two targets were consolidated and maintained at a cost similar to that of a single target.

  16. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  17. Neurophysiological Indices of Atypical Auditory Processing and Multisensory Integration Are Associated with Symptom Severity in Autism

    Science.gov (United States)

    Brandwein, Alice B.; Foxe, John J.; Butler, John S.; Frey, Hans-Peter; Bates, Juliana C.; Shulman, Lisa H.; Molholm, Sophie

    2015-01-01

    Atypical processing and integration of sensory inputs are hypothesized to play a role in unusual sensory reactions and social-cognitive deficits in autism spectrum disorder (ASD). Reports on the relationship between objective metrics of sensory processing and clinical symptoms, however, are surprisingly sparse. Here we examined the relationship…

  18. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Science.gov (United States)

    Okada, Kayoko; Venezia, Jonathan H; Matchin, William; Saberi, Kourosh; Hickok, Gregory

    2013-01-01

    Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

  19. An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Kayoko Okada

    Full Text Available Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in posterior superior temporal sulcus (pSTS) and this site has been implicated in multimodal integration. Traditionally, multisensory interactions are considered high-level processing that engages heteromodal association cortices (such as STS). Recent work, however, challenges this notion and suggests that multisensory interactions may occur in low-level unimodal sensory cortices. While previous audiovisual speech studies demonstrate that high-level multisensory interactions occur in pSTS, what remains unclear is how early in the processing hierarchy these multisensory interactions may occur. The goal of the present fMRI experiment is to investigate how visual speech can influence activity in auditory cortex above and beyond its response to auditory speech. In an audiovisual speech experiment, subjects were presented with auditory speech with and without congruent visual input. Holding the auditory stimulus constant across the experiment, we investigated how the addition of visual speech influences activity in auditory cortex. We demonstrate that congruent visual speech increases the activity in auditory cortex.

  20. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the response of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.
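    A hedged sketch of the general idea, assuming (as one plausible reading) that frame-wise dissonance values are integrated with an exponentially decaying trace standing in for auditory short-term memory; the leaky-integrator form and the decay constant are illustrative assumptions, not the authors' model.

        import numpy as np

        def memory_dissonance(frame_dissonance, frame_rate, tau=3.0):
            """Leaky integration of frame-wise dissonance values.

            frame_dissonance : 1-D array, one dissonance value per analysis frame
            tau              : memory time constant in seconds (assumed)
            """
            alpha = np.exp(-1.0 / (tau * frame_rate))    # per-frame decay factor
            out = np.empty_like(frame_dissonance, dtype=float)
            acc = 0.0
            for i, d in enumerate(frame_dissonance):
                acc = alpha * acc + (1 - alpha) * d      # decaying trace of past events
                out[i] = acc
            return out

        # Toy usage: a brief dissonant event keeps influencing the measure afterwards.
        frames = np.zeros(100)
        frames[20:25] = 1.0
        trace = memory_dissonance(frames, frame_rate=10.0)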

  1. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  2. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  3. Harnessing temporal modes for multi-photon quantum information processing based on integrated optics.

    Science.gov (United States)

    Harder, G; Ansari, V; Bartley, T J; Brecht, B; Silberhorn, C

    2017-08-06

    In the last few decades, there has been much progress on low-loss waveguides, very efficient photon-number detectors and nonlinear processes. Engineered sum-frequency conversion is now at a stage where it allows operation on arbitrary temporal broadband modes, thus making the spectral degree of freedom accessible for information coding. In such schemes, the information is often encoded into the temporal modes of a single photon. Here, we analyse the prospect of using multi-photon states or squeezed states in different temporal modes based on integrated optics devices. We describe an analogy between mode-selective sum-frequency conversion and a network of spatial beam splitters. Furthermore, we analyse the limits on the achievable squeezing in waveguides with current technology and the loss limits in the conversion process. This article is part of the themed issue 'Quantum technology for the 21st century'. © 2017 The Author(s).
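    The beam-splitter analogy can be stated compactly. In a hedged, textbook-style formulation (symbols illustrative, not the authors' notation), a mode-selective sum-frequency converter couples the selected input temporal mode \hat{a} and the converted mode \hat{c} like the two ports of a beam splitter whose mixing angle \theta grows with pump amplitude:

        \hat{b}_{\mathrm{out}} = \cos\theta\,\hat{a}_{\mathrm{in}} + \sin\theta\,\hat{c}_{\mathrm{in}},
        \qquad
        \hat{d}_{\mathrm{out}} = -\sin\theta\,\hat{a}_{\mathrm{in}} + \cos\theta\,\hat{c}_{\mathrm{in}}.

    At \theta = \pi/2 the selected temporal mode is converted completely while orthogonal temporal modes pass through unchanged, which is what lets a network of such converters act like a network of spatial beam splitters operating on temporal modes.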

  4. Patterns of morphological integration between parietal and temporal areas in the human skull.

    Science.gov (United States)

    Bruner, Emiliano; Pereira-Pedro, Ana Sofia; Bastir, Markus

    2017-10-01

    Modern humans have evolved bulging parietal areas and large, projecting temporal lobes. Both changes, largely due to a longitudinal expansion of these cranial and cerebral elements, were hypothesized to be the result of brain evolution and cognitive variations. Nonetheless, the independence of these two morphological characters has not been evaluated. Because of structural and functional integration among cranial elements, changes in the position of the temporal poles can be a secondary consequence of parietal bulging and reorientation of the head axis. In this study, we use geometric morphometrics to test the correlation between parietal shape and the morphology of the endocranial base in a sample of adult modern humans. Our results suggest that parietal proportions show no correlation with the relative position of the temporal poles within the spatial organization of the endocranial base. The vault and endocranial base are likely to be involved in distinct morphogenetic processes, with scarce or no integration between these two districts. Therefore, the current evidence rejects the hypothesis of reciprocal morphological influences between parietal and temporal morphology, suggesting that evolutionary spatial changes in these two areas may have been independent. However, parietal bulging exerts a visible effect on the rotation of the cranial base, influencing head position and orientation. This change can have had a major relevance in the reorganization of the head functional axis. © 2017 Wiley Periodicals, Inc.

  5. Temporal integration of loudness in listeners with hearing losses of primarily cochlear origin

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1999-01-01

    To investigate how hearing loss of primarily cochlear origin affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level for 15 listeners with cochlear impairments and for seven age-matched controls. Three frequencies, usually 0.5, 1, and 4 kHz, were tested. […] Listeners with high-frequency hearing losses (slopes >50 dB/octave) showed larger-than-normal maximal amounts of temporal integration (40 to 50 dB). This finding is consistent with the shallow loudness functions predicted by our excitation-pattern model for impaired listeners [in Modeling Sensorineural Hearing Loss, edited by W. Jesteadt (Erlbaum, Mahwah, NJ, 1997), pp. 187–198]. Loudness functions derived from impaired listeners' temporal-integration functions indicate that restoration of loudness in listeners with cochlear hearing loss usually will require the same gain whether the sound is short or long. ©1999 Acoustical Society of America.
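    The link between shallow loudness functions and large temporal integration can be made explicit with a small, hedged derivation (a standard power-law assumption, not necessarily the authors' exact model). Assume loudness N = k·I^θ and a level-independent loudness ratio R between equal-intensity 200-ms and 5-ms tones; the level difference ΔL needed to match their loudness is then

        N = k\,I^{\theta}
        \quad\Rightarrow\quad
        \Delta L = \frac{10\,\log_{10} R}{\theta}\ \mathrm{dB},

    so a shallower loudness function (smaller θ) predicts a larger amount of temporal integration, consistent with the unusually large (40 to 50 dB) values reported here for steep high-frequency losses.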

  6. Comparing the influence of spectro-temporal integration in computational speech segregation

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2016-01-01

    The goal of computational speech segregation systems is to automatically segregate a target speaker from interfering maskers. Typically, these systems include a feature extraction stage in the front-end and a classification stage in the back-end. A spectro-temporal integration strategy can be applied in either the front-end, using the so-called delta features, or in the back-end, using a second classifier that exploits the posterior probability of speech from the first classifier across a spectro-temporal window. This study systematically analyzes the influence of such stages on segregation performance, the error distributions and intelligibility predictions. Results indicated that it could be problematic to exploit context in the back-end, even though such a spectro-temporal integration stage improves the segregation performance. Also, the results emphasized the potential need of a single…
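    For concreteness, a minimal sketch of front-end spectro-temporal integration via delta features, using the common regression-style delta formula; the window length and feature dimensions are illustrative assumptions, not the study's configuration.

        import numpy as np

        def delta_features(features, window=2):
            """features: (n_frames, n_dims) array; returns same-shape delta features."""
            n_frames, _ = features.shape
            padded = np.pad(features, ((window, window), (0, 0)), mode="edge")
            denom = 2 * sum(k * k for k in range(1, window + 1))
            deltas = np.zeros_like(features, dtype=float)
            for k in range(1, window + 1):
                deltas += k * (padded[window + k:window + k + n_frames]
                               - padded[window - k:window - k + n_frames])
            return deltas / denom

        # Usage: append deltas to the static features before the first classifier.
        rng = np.random.default_rng(0)
        static = rng.random((100, 31))            # e.g., 31 frequency-channel features
        augmented = np.hstack([static, delta_features(static)])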

  7. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  8. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision, and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Visual cues gave a mean percentage error of 11.5% (SD ±7.0%); auditory cues, a mean percentage error of 12.9% (SD ±11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, the mean accuracy of subjects approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  9. Sensorimotor impairment of speech auditory feedback processing in aphasia.

    Science.gov (United States)

    Behroozmand, Roozbeh; Phillip, Lorelei; Johari, Karim; Bonilha, Leonardo; Rorden, Chris; Hickok, Gregory; Fridriksson, Julius

    2018-01-15

    We investigated the brain network involved in speech sensorimotor processing by studying patients with post-stroke aphasia using an altered auditory feedback (AAF) paradigm. We combined lesion-symptom-mapping analysis and behavioral testing to examine the pervasiveness of speech sensorimotor deficits and their relationship with cortical damage. Sixteen participants with aphasia and sixteen neurologically intact individuals completed a speech task under AAF. The task involved producing speech vowel sounds under real-time pitch-shifted auditory feedback alteration. This task provided an objective measure for each individual's ability to compensate for mismatch (error) in speech auditory feedback. Results indicated that compensatory speech responses to AAF were significantly diminished in participants with aphasia compared with control. We observed that within the aphasic group, subjects with lower scores on the speech repetition task exhibited greater degree of diminished responses. Lesion-symptom-mapping analysis revealed that the onset phase (50-150 ms) of diminished AAF responses was predicted by damage to auditory cortical regions within the superior and middle temporal gyrus, whereas the rising phase (150-250 ms) and the peak (250-350 ms) of diminished AAF responses were predicted by damage to the inferior frontal gyrus and supramarginal gyrus areas, respectively. These findings suggest that damage to the auditory, motor, and auditory-motor integration networks is associated with impaired sensorimotor function for speech error processing. We suggest that a sensorimotor integration network, as revealed by brain regions related to temporally specific components of AAF responses, is related to speech processing and specific aspects of speech impairment, notably repetition deficits, in individuals with aphasia. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  11. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    Science.gov (United States)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to
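    One distance cue such a system must manage can be sketched briefly: as a virtual source recedes, the direct sound falls roughly with 1/distance while the diffuse room response stays comparatively constant, lowering the direct-to-reverberant ratio. The following hedged sketch illustrates that relationship only; the functions, constants, and names are illustrative assumptions, not the dissertation's control laws.

        import math

        def distance_gains(distance_m, room_gain=0.2, ref_distance_m=1.0):
            """Return (direct_gain, reverb_gain) for a virtual source at distance_m."""
            direct = ref_distance_m / max(distance_m, ref_distance_m)  # ~1/d law
            reverb = room_gain                                         # roughly constant
            return direct, reverb

        for d in (1.0, 2.0, 4.0, 8.0):
            g_direct, g_reverb = distance_gains(d)
            drr_db = 20 * math.log10(g_direct / g_reverb)
            print(f"{d:4.1f} m: direct={g_direct:.3f}, D/R ratio={drr_db:+.1f} dB")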

  12. Visuo-tactile integration in autism: atypical temporal binding may underlie greater reliance on proprioceptive information.

    Science.gov (United States)

    Greenfield, Katie; Ropar, Danielle; Smith, Alastair D; Carey, Mark; Newport, Roger

    2015-01-01

    Evidence indicates that social functioning deficits and sensory sensitivities in autism spectrum disorder (ASD) are related to atypical sensory integration. The exact mechanisms underlying these integration difficulties are unknown; however, two leading accounts are (1) an over-reliance on proprioception and (2) atypical visuo-tactile temporal binding. We directly tested these theories by selectively manipulating proprioceptive alignment and visuo-tactile synchrony to assess the extent that these impact upon body ownership. Children with ASD and typically developing controls placed their hand into a multisensory illusion apparatus, which presented two, identical live video images of their own hand in the same plane as their actual hand. One virtual hand was aligned proprioceptively with the actual hand (the veridical hand), and the other was displaced to the left or right. While a brushstroke was applied to the participants' actual (hidden) hand, they observed the two virtual images of their hand also being stroked and were asked to identify their real hand. During brushing, one of three different temporal delays was applied to either the displaced hand or the veridical hand. Thus, only one virtual hand had synchronous visuo-tactile inputs. Results showed that visuo-tactile synchrony overrides incongruent proprioceptive inputs in typically developing children but not in autistic children. Evidence for both temporally extended visuo-tactile binding and a greater reliance on proprioception are discussed. This is the first study to provide definitive evidence for temporally extended visuo-tactile binding in ASD. This may result in reduced processing of amodal inputs (i.e. temporal synchrony) over modal-specific information (i.e. proprioception). This would likely lead to failures in appropriately binding information from related events, which would impact upon sensitivity to sensory stimuli, body representation and social processes such as empathy and imitation.

  13. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  14. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Directory of Open Access Journals (Sweden)

    Sonja Schall

    Full Text Available It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  15. Reconstructing speech from human auditory cortex.

    Directory of Open Access Journals (Sweden)

    Brian N Pasley

    2012-01-01

    Full Text Available How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
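    A hedged sketch of the linear reconstruction step this record describes: a ridge-regularized linear map from time-lagged neural activity to spectrogram channels. Shapes, lag range, and the ridge parameter are illustrative assumptions; the wrap-around of np.roll at the edges is ignored for brevity.

        import numpy as np

        def lag_matrix(neural, max_lag):
            """Stack time-lagged copies of the neural data: (n_times, n_elec * max_lag).
            np.roll wraps at the edges; a real analysis would zero-pad instead."""
            return np.hstack([np.roll(neural, lag, axis=0) for lag in range(max_lag)])

        def fit_reconstruction(neural, spectrogram, max_lag=10, ridge=1.0):
            """Ridge regression: spectrogram ~ time-lagged neural activity."""
            X = lag_matrix(neural, max_lag)
            XtX = X.T @ X + ridge * np.eye(X.shape[1])
            return np.linalg.solve(XtX, X.T @ spectrogram)  # (n_elec*lags, n_freq)

        # Toy usage: 2000 samples, 64 electrodes, 32 spectrogram channels.
        rng = np.random.default_rng(1)
        neural = rng.standard_normal((2000, 64))
        spec = rng.standard_normal((2000, 32))
        W = fit_reconstruction(neural, spec)
        reconstructed = lag_matrix(neural, 10) @ W          # decoded spectrogram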

  16. [Application of simultaneous auditory evoked potentials and functional magnetic resonance recordings for examination of central auditory system--preliminary results].

    Science.gov (United States)

    Milner, Rafał; Rusiniak, Mateusz; Wolak, Tomasz; Piatkowska-Janko, Ewa; Naumczyk, Patrycja; Bogorodzki, Piotr; Senderski, Andrzej; Ganc, Małgorzata; Skarzyński, Henryk

    2011-01-01

    Processing of auditory information in the central nervous system is based on a series of quickly occurring neural processes that cannot be separately monitored using fMRI registration alone. Simultaneous recording of auditory evoked potentials, characterized by good temporal resolution, and functional magnetic resonance imaging, with excellent spatial resolution, allows studying higher auditory functions with precision both in time and space. The aim of the study was to implement the simultaneous AEP-fMRI recording method for the investigation of information processing at different levels of the central auditory system. Five healthy volunteers, aged 22-35 years, participated in the experiment. The study was performed using a high-field (3T) MR scanner from Siemens and a 64-channel electrophysiological system Neuroscan from Compumedics. Auditory evoked potentials generated by acoustic stimuli (standard and deviant tones) were registered using a modified odd-ball procedure. Functional magnetic resonance recordings were performed using a sparse acquisition paradigm. The electrophysiological recordings were analyzed by determining the voltage distributions of the AEPs on the scalp and modeling their intracerebral bioelectrical generators (dipoles). FMRI activations were determined on the basis of deviant-to-standard and standard-to-deviant functional contrasts. Results obtained from the electrophysiological studies were integrated with the functional outcomes. The morphology, amplitude, latency and voltage distribution of auditory evoked potentials (P1, N1, P2) to standard stimuli presented during simultaneous AEP-fMRI registrations were very similar to the responses obtained outside the scanner room. Significant fMRI activations to standard stimuli were found mainly in the auditory cortex. Activations in these regions corresponded with N1-wave dipoles modeled from auditory potentials generated by standard tones. Auditory evoked potentials to deviant stimuli were recorded only outside the MRI

  17. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  18. Syllabic (~2-5 Hz) and fluctuation (~1-10 Hz) ranges in speech and auditory processing

    Science.gov (United States)

    Edwards, Erik; Chang, Edward F.

    2013-01-01

    Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819

  19. Temporal integration of loudness, loudness discrimination, and the form of the loudness function

    OpenAIRE

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1997-01-01

    Temporal integration for loudness of 5-kHz tones was measured as a function of level between 2 and 60 dB SL. Absolute thresholds and levels required to produce equal loudness were measured for 2-, 10-, 50- and 250-ms tones using adaptive, two-interval, two-alternative forced-choice procedures. The procedure for loudness balances is new and obtained concurrent measurements for ten tone pairs in ten interleaved tracks. Each track converged at the level required to make the variable stimulus jus...

  20. Spatio-Temporal Estimation of Integrated Water Vapour Over the Malaysian Peninsula during Monsoon Season

    Science.gov (United States)

    Salihin, S.; Musa, T. A.; Radzi, Z. Mohd

    2017-10-01

    This paper provides precise information on the spatio-temporal distribution of water vapour retrieved from the Zenith Path Delay (ZPD) estimated by Global Positioning System (GPS) processing over the Malaysian Peninsula. A time-series analysis of the ZPD and Integrated Water Vapour (IWV) values was performed to capture their seasonal variation across monsoon seasons. The study found that the pattern and distribution of atmospheric water vapour over the Malaysian Peninsula across the whole four-year period were influenced by the two inter-monsoon and two monsoon seasons: the First Inter-monsoon, the Second Inter-monsoon, the Southwest monsoon and the Northeast monsoon.
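    For reference, the conversion chain from GPS-estimated delays to IWV commonly used in such studies (a hedged, textbook-style summary following the standard formulation rather than this paper's notation): the zenith wet delay (ZWD) is the ZPD minus its hydrostatic part, and IWV is proportional to ZWD,

        \mathrm{ZWD} = \mathrm{ZPD} - \mathrm{ZHD}, \qquad
        \mathrm{IWV} = \Pi \cdot \mathrm{ZWD}, \qquad
        \Pi = \frac{10^{6}}{\rho_{w}\,R_{v}\,\bigl(k_{2}' + k_{3}/T_{m}\bigr)},

    where ρ_w is the density of liquid water, R_v the specific gas constant of water vapour, k_2' and k_3 atmospheric refractivity constants, and T_m the weighted-mean temperature of the atmosphere.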

  1. Border-cell migration requires integration of spatial and temporal signals by the BTB protein Abrupt.

    Science.gov (United States)

    Jang, Anna C-C; Chang, Yu-Chiuan; Bai, Jianwu; Montell, Denise

    2009-05-01

    During development, elaborate patterns of cell differentiation and movement must occur in the correct locations and at the proper times. Developmental timing has been studied less than spatial pattern formation, and the mechanisms integrating the two are poorly understood. Border-cell migration in the Drosophila ovary occurs specifically at stage 9. Timing of the migration is regulated by the steroid hormone ecdysone, whereas spatial patterning of the migratory population requires localized activity of the JAK-STAT pathway. Ecdysone signalling is patterned spatially as well as temporally, although the mechanisms are not well understood. In stage 9 egg chambers, ecdysone signalling is highest in anterior follicle cells including the border cells. We identify the gene abrupt as a repressor of ecdysone signalling and border-cell migration. Abrupt protein is normally lost from border-cell nuclei during stage 9, in response to JAK-STAT activity. This contributes to the spatial pattern of the ecdysone response. Abrupt attenuates ecdysone signalling by means of a direct interaction with the basic helix-loop-helix (bHLH) domain of the P160 ecdysone receptor coactivator Taiman (Tai). Taken together, these findings provide a molecular mechanism by which spatial and temporal cues are integrated.

  2. Chromatic temporal integration and retinal eccentricity: psychophysics, neurometric analysis and cortical pooling.

    Science.gov (United States)

    Swanson, William H; Pan, Fei; Lee, Barry B

    2008-11-01

    Psychophysical chromatic sensitivity deteriorates in peripheral retina, even after appropriate size scaling of targets. This decrease is more marked for stimuli targeted at the long- (L) to middle-wavelength (M) cone opponent system than for stimuli targeted at short-wavelength (S) pathways. Foveal chromatic mechanisms integrate over several hundred milliseconds for pulse detection. If the time course for integration were shorter in the periphery, this might account for sensitivity loss. Psychophysical chromatic temporal integration (critical duration) for human observers was estimated as a function of eccentricity. Critical duration decreased by a factor of 2 (from approximately 200 to approximately 100 ms) from the fovea to 20 degrees eccentricity. This partly (but not completely) accounts for the decrease in /L-M/ sensitivity in the periphery, but almost completely accounts for the decrease in S-cone sensitivity. Some loss of /L-M/ sensitivity thus has a cortical locus. In a physiological analysis, we consider how the /L-M/ cone parvocellular pathway integrates chromatic signals. Neurometric contrast sensitivities of individual retinal ganglion cells decreased with the square-root of stimulus duration (as expected from Poisson statistics of ganglion cell firing). In contrast, psychophysical data followed an inverse linear relationship (Bloch's law). Models of cortical pooling mechanisms incorporating uncertainty as to stimulus onset and duration can at least partially account for this discrepancy.
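    The psychophysics/neurometrics discrepancy described here can be written compactly (a hedged restatement, not the authors' notation). For threshold contrast C at stimulus duration T within the integration window:

        \text{Poisson (ganglion-cell) prediction:}\quad C \propto T^{-1/2},
        \qquad
        \text{Bloch's law (psychophysics):}\quad C \propto T^{-1}.

    Behavioural sensitivity thus improves faster with duration than single-cell spike statistics allow, which is why cortical pooling models with onset/duration uncertainty are invoked to bridge the gap.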

  3. Temporal Dynamics of the Integration of Intention and Outcome in Harmful and Helpful Moral Judgment

    Science.gov (United States)

    Gan, Tian; Lu, Xiaping; Li, Wanqing; Gui, Danyang; Tang, Honghong; Mai, Xiaoqin; Liu, Chao; Luo, Yue-Jia

    2016-01-01

    The ability to integrate the moral intention information with the outcome of an action plays a crucial role in mature moral judgment. Functional magnetic resonance imaging (fMRI) studies implicated that both prefrontal and temporo-parietal cortices are involved in moral intention and outcome processing. Here, we used the event-related potentials (ERPs) technique to investigate the temporal dynamics of the processing of the integration between intention and outcome information in harmful and helpful moral judgment. In two experiments, participants were asked to make moral judgments for agents who produced either negative/neutral outcomes with harmful/neutral intentions (harmful judgment) or positive/neutral outcomes with helpful/neutral intentions (helpful judgment). Significant ERP differences between attempted and successful actions over prefrontal and bilateral temporo-parietal regions were found in both harmful and helpful moral judgment, which suggest a possible time course of the integration processing in the brain, starting from the right temporo-parietal area (N180) to the left temporo-parietal area (N250), then the prefrontal area (FSW) and the right temporo-parietal area (TP450 and TPSW) again. These results highlighted the fast moral intuition reaction and the late integration processing over the right temporo-parietal area. PMID:26793144

  4. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single frame pose recovery, temporal integration and model adaptation. Single frame pose recovery consists of a hypothesis

  5. Temporal Processing and Reading Disability.

    Science.gov (United States)

    Share, David L.; Jorm, Anthony F.; Maclean, Rod; Matthews, Russell

    2002-01-01

    Examines the hypothesis that early auditory temporal processing deficits cause later specific reading disability by impairing phonological processing. Suggests that auditory temporal deficits in dyslexics may be associated with dysphasic-type symptoms observed by Tallal and her colleagues in specific language-impaired populations, but do not cause…

  6. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of
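
    The two dependent measures described here are straightforward to compute. A minimal sketch with hypothetical recall data (the melodies, onsets, and the coefficient of variation standing in as one common operationalization of interonset-interval variability are our own illustrative choices, not the authors'):

    ```python
    import numpy as np

    def pitch_accuracy(produced, target):
        """Percentage of target pitches produced correctly, position by position."""
        return 100.0 * sum(p == t for p, t in zip(produced, target)) / len(target)

    def ioi_variability(onsets_ms):
        """Coefficient of variation of interonset intervals (lower = more regular)."""
        iois = np.diff(np.asarray(onsets_ms, dtype=float))
        return iois.std(ddof=1) / iois.mean()

    # Hypothetical recall data: MIDI pitches and note-onset times in ms.
    target   = [60, 62, 64, 65, 67, 69, 71, 72]
    produced = [60, 62, 64, 65, 67, 68, 71, 72]
    onsets   = [0, 510, 995, 1500, 2010, 2490, 3005, 3500]
    print(pitch_accuracy(produced, target))   # 87.5
    print(round(ioi_variability(onsets), 3))  # ~0.027
    ```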

  8. Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: top-down and bottom-up mismatch detection.

    Directory of Open Access Journals (Sweden)

    Takeshi Uno

    Full Text Available Visual inputs can distort auditory perception, and accurate auditory processing requires the ability to detect and ignore visual input that is simultaneous and incongruent with auditory information. However, the neural basis of this auditory selection from audiovisual information is unknown, whereas the integration of audiovisual inputs has been intensively researched. Here, we tested the hypothesis that the inferior frontal gyrus (IFG) and superior temporal sulcus (STS) are involved in top-down and bottom-up processing, respectively, of target auditory information from audiovisual inputs. We recorded high gamma activity (HGA), which is associated with neuronal firing in local brain regions, using electrocorticography while patients with epilepsy judged the syllable spoken by a voice while looking at a voice-congruent or -incongruent lip movement from the speaker. The STS exhibited stronger HGA if the patient was presented with information of large audiovisual incongruence than of small incongruence, especially if the auditory information was correctly identified. On the other hand, the IFG exhibited stronger HGA in trials with small audiovisual incongruence when patients correctly perceived the auditory information than when patients incorrectly perceived the auditory information due to the mismatched visual information. These results indicate that the IFG and STS have dissociated roles in selective auditory processing, and suggest that the neural basis of selective auditory processing changes dynamically in accordance with the degree of incongruity between auditory and visual information.

  9. Psychological Contract Development: An Integration of Existing Knowledge to Form a Temporal Model

    Directory of Open Access Journals (Sweden)

    Kelly Windle

    2014-07-01

    Full Text Available The psychological contract has received substantial theoretical attention over the past two decades as a popular framework within which to examine contemporary employment relationships. Previous research mostly examines breach and violation of the psychological contract and their impact on employee-organization outcomes. Few studies have employed longitudinal, prospective research designs to investigate the psychological contract and, as a result, psychological contract content and formation are incompletely understood. It is argued that employment relationships may be better proactively managed with a greater understanding of the formation of, and changes in, the psychological contract. We examine the existing psychological contract literature to identify five key factors proposed to contribute to the formation of psychological contracts. We extend the current research by integrating these factors for the first time into a temporal model of psychological contract development.

  10. Analysis of spatio-temporal dynamics of Arctic region vegetation based on integrated data processing

    Science.gov (United States)

    Mochalov, Viktor; Zelentsov, Viacheslav; Grigirieva, Olga; Brovkina, Olga; Lavrinenko, Igor; Pimanov, Ilia

    2017-04-01

    Currently, there is a significant amount of in-situ data and airborne and satellite observations for the assessment of tundra vegetation. However, the simultaneous analysis of these data remains a topical issue, as does the development of methods for the integrated processing of heterogeneous (in-situ, airborne, space) and multi-temporal data for analyzing the spatio-temporal dynamics of vegetation across large regions and identifying relationships among the observed changes. The study was aimed at filling this gap for the territory of Russia's Far North. The objectives of the study were: 1/ mapping of vegetation types; 2/ assessing the territories which are suitable for grazing reindeer in winter and summer periods; 3/ substantiation of the requirements for remote sensing data for vegetation mapping; and 4/ identification of territories under anthropogenic disturbance. The study area was located in the Nenets Autonomous Okrug of Russia. Time series of satellite Resurs-P, Kanopus-V and Sentinel-2 data, together with systematic geobotanical descriptions of the study area, were used for the classification of vegetation types and the identification of vegetation dynamics and disturbed territories. Territories for grazing reindeer were assessed based on the map of vegetation types and thirty years of field monitoring of reindeer feeding and habitats. The integrated processing of the data used in the study was implemented by a complex methodical scheme, which included algorithms and methods for processing satellite data, requirements for remote sensing data, decisions to reduce the cost of data collection while providing the required level of result quality, and recommendations for the management of industrial activity in the Nenets Autonomous Okrug of Russia.

  11. From perception to action: temporal integrative functions of prefrontal and parietal neurons.

    Science.gov (United States)

    Quintana, J; Fuster, J M

    1999-01-01

    The dorsolateral prefrontal cortex (DPFC) and the posterior parietal cortex (PPC) are anatomically and functionally interconnected, and have been implicated in working memory and the preparation for behavioral action. To substantiate those functions at the neuronal level, we designed a visuomotor task that dissociated the perceptual and executive aspects of the perception-action cycle in both space and time. In that task, the trial-initiating cue (a color) indicated with different degrees of certainty the direction of the correct manual response 12 s later. We recorded extracellular activity from 258 prefrontal and 223 parietal units in two monkeys performing the task. In the DPFC, some units (memory cells) were attuned to the color of the cue, independent of the response-direction it connoted. Their discharge tended to diminish in the course of the delay between cue and response. In contrast, few color-related units were found in PPC, and these did not show decreasing patterns of delay activity. Other units in both cortices (set cells) were attuned to response-direction and tended to accelerate their firing in anticipation of the response and in proportion to the predictability of its direction. A third group of units was related to the determinacy of the act; their firing was attuned to the certainty with which the animal could predict the correct response, whatever its direction. Cells of the three types were found closely intermingled histologically. These findings further support and define the role of DPFC in executive functions and in the temporal closure of the perception-action cycle. The findings also agree with the involvement of PPC in spatial aspects of visuomotor behavior, and add a temporal integrative dimension to that involvement. Together, the results provide physiological evidence for the role of a prefrontal-parietal network in the integration of perception with action across time.

  12. Neuromechanistic Model of Auditory Bistability.

    Directory of Open Access Journals (Sweden)

    James Rankin

    2015-11-01

    Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory-feature-dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant for a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
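
    The core ingredients named above (mutual inhibition, slow adaptation, noise) are enough to produce alternations in a two-unit firing-rate caricature. The sketch below is not the authors' model; all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(T=60.0, dt=1e-3, beta=1.1, g=0.5, tau=0.01, tau_a=2.0, sigma=0.08):
        """Two units (integrated vs. segregated percept) compete through
        mutual inhibition; slow adaptation plus noise drives alternations."""
        n = int(T / dt)
        r = np.array([0.6, 0.4])   # firing rates of the two percept units
        a = np.zeros(2)            # slow adaptation variables
        dominant = np.empty(n, dtype=int)
        for i in range(n):
            inp = 1.0 - beta * r[::-1] - g * a   # cross-inhibition + adaptation
            drive = 1.0 / (1.0 + np.exp(-8.0 * (inp - 0.5)))
            r += dt / tau * (-r + drive) + np.sqrt(dt) * sigma * rng.standard_normal(2)
            a += dt / tau_a * (-a + r)
            dominant[i] = int(r[1] > r[0])
        return dominant

    dom = simulate()
    print("perceptual switches in 60 s:", int(np.count_nonzero(np.diff(dom))))
    ```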

  13. Skill Learning for Intelligent Robot by Perception-Action Integration: A View from Hierarchical Temporal Memory

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2017-01-01

    Full Text Available Learning skills autonomously through interactions with the environment is a crucial ability for an intelligent robot. Perception-action integration, or the sensorimotor cycle, is an important issue in imitation learning because it provides a natural mechanism that avoids complex hand-programming. Recently, neurocomputing models and developmental intelligence methods have been considered a new trend for implementing robot skill learning. In this paper, based on research on models of the human neocortex, we present a skill learning method using a perception-action integration strategy from the perspective of hierarchical temporal memory (HTM) theory. Sequential sensor data representing a certain skill from an RGB-D camera are received and then encoded as a sequence of Sparse Distributed Representation (SDR) vectors. The sequential SDR vectors are treated as the inputs of the perception-action HTM. The HTM learns sequences of SDRs and predicts what the next input SDR will be. It stores the transitions between the currently perceived sensor data and the next predicted actions. We evaluated the performance of this framework by teaching a humanoid NAO robot a hand-shaking skill. The experimental results show that the skill learning method designed in this paper is promising.
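
    A toy scalar encoder conveys the key SDR property exploited here: nearby inputs share many active bits. This is a simplified HTM-style encoder of our own, not the authors' implementation:

    ```python
    import numpy as np

    def encode_sdr(value, size=2048, active=40, vmin=0.0, vmax=1.0):
        """Scalar encoder producing a Sparse Distributed Representation:
        a binary vector with a contiguous block of active bits whose
        position tracks the input value."""
        v = np.clip((value - vmin) / (vmax - vmin), 0.0, 1.0)
        start = int(v * (size - active))
        sdr = np.zeros(size, dtype=np.uint8)
        sdr[start:start + active] = 1
        return sdr

    a, b = encode_sdr(0.500), encode_sdr(0.505)
    print("active bits:", int(a.sum()), "| overlap of similar inputs:", int((a & b).sum()))
    ```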

  14. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  15. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
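
    The subadditivity criterion (AV − V < A) can be checked directly on peak amplitudes. A minimal sketch with hypothetical N1 values chosen to mirror the reported pattern (stronger suppression for congruent stimuli):

    ```python
    def n1_suppression(av, v, a):
        """Subadditivity of the auditory N1. The residual (AV - V) estimates the
        auditory contribution within the audiovisual response; since the N1 is a
        negative deflection (microvolts), suppression is how much smaller in
        magnitude that residual is than the auditory-only response."""
        residual = av - v
        return abs(a) - abs(residual)

    # Hypothetical N1 peak amplitudes (microvolts).
    print(n1_suppression(av=-7.5, v=-2.0, a=-8.0))  # congruent: 2.5 (stronger suppression)
    print(n1_suppression(av=-8.5, v=-2.0, a=-8.0))  # incongruent: 1.5 (weaker suppression)
    ```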

  16. Moving on time: brain network for auditory-motor synchronization is modulated by rhythm complexity and musical training.

    Science.gov (United States)

    Chen, Joyce L; Penhune, Virginia B; Zatorre, Robert J

    2008-02-01

    Much is known about the motor system and its role in simple movement execution. However, little is understood about the neural systems underlying auditory-motor integration in the context of musical rhythm, or the enhanced ability of musicians to execute precisely timed sequences. Using functional magnetic resonance imaging, we investigated how performance and neural activity were modulated as musicians and nonmusicians tapped in synchrony with progressively more complex and less metrically structured auditory rhythms. A functionally connected network was implicated in extracting higher-order features of a rhythm's temporal structure, with the dorsal premotor cortex mediating these auditory-motor interactions. In contrast to past studies, musicians recruited the prefrontal cortex to a greater degree than nonmusicians, whereas secondary motor regions were recruited to the same extent. We argue that the superior ability of musicians to deconstruct and organize a rhythm's temporal structure relates to the greater involvement of the prefrontal cortex mediating working memory.

  17. Hey1 and Hey2 control the spatial and temporal pattern of mammalian auditory hair cell differentiation downstream of Hedgehog signaling.

    Science.gov (United States)

    Benito-Gonzalez, Ana; Doetzlhofer, Angelika

    2014-09-17

    Mechano-sensory hair cells (HCs), housed in the inner ear cochlea, are critical for the perception of sound. In the mammalian cochlea, differentiation of HCs occurs in a striking basal-to-apical and medial-to-lateral gradient, which is thought to ensure correct patterning and proper function of the auditory sensory epithelium. Recent studies have revealed that Hedgehog signaling opposes HC differentiation and is critical for the establishment of the graded pattern of auditory HC differentiation. However, how Hedgehog signaling interferes with HC differentiation is unknown. Here, we provide evidence that in the murine cochlea, Hey1 and Hey2 control the spatiotemporal pattern of HC differentiation downstream of Hedgehog signaling. It has been recently shown that HEY1 and HEY2, two highly redundant HES-related transcriptional repressors, are highly expressed in supporting cell (SC) and HC progenitors (prosensory cells), but their prosensory function remained untested. Using a conditional double knock-out strategy, we demonstrate that prosensory cells form and proliferate properly in the absence of Hey1 and Hey2 but differentiate prematurely because of precocious upregulation of the pro-HC factor Atoh1. Moreover, we demonstrate that prosensory-specific expression of Hey1 and Hey2 and their subsequent graded downregulation are controlled by Hedgehog signaling in a largely FGFR-dependent manner. In summary, our study reveals a critical role for Hey1 and Hey2 in prosensory cell maintenance and identifies Hedgehog signaling as a novel upstream regulator of their prosensory function in the mammalian cochlea. The regulatory mechanism described here might be a broadly applied mechanism for controlling progenitor behavior in the central and peripheral nervous system. Copyright © 2014 the authors.

  18. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    Science.gov (United States)

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  19. Integrating Temporal and Spatial Scales: Human Structural Network Motifs Across Age and Region of Interest Size

    Science.gov (United States)

    Echtermeyer, Christoph; Han, Cheol E.; Rotarska-Jagiela, Anna; Mohr, Harald; Uhlhaas, Peter J.; Kaiser, Marcus

    2011-01-01

    Human brain networks can be characterized at different temporal or spatial scales given by the age of the subject or the spatial resolution of the neuroimaging method. Integration of data across scales can only be successful if the combined networks show a similar architecture. One way to compare networks is to look at spatial features, based on fiber length, and topological features of individual nodes where outlier nodes form single node motifs whose frequency yields a fingerprint of the network. Here, we observe how characteristic single node motifs change over age (12–23 years) and network size (414, 813, and 1615 nodes) for diffusion tensor imaging structural connectivity in healthy human subjects. First, we find the number and diversity of motifs in a network to be strongly correlated. Second, comparing different scales, the number and diversity of motifs varied across the temporal (subject age) and spatial (network resolution) scale: certain motifs might only occur at one spatial scale or for a certain age range. Third, regions of interest which show one motif at a lower resolution may show a range of motifs at a higher resolution which may or may not include the original motif at the lower resolution. Therefore, both the type and localization of motifs differ for different spatial resolutions. Our results also indicate that spatial resolution has a higher effect on topological measures whereas spatial measures, based on fiber lengths, remain more comparable between resolutions. Therefore, spatial resolution is crucial when comparing characteristic node fingerprints given by topological and spatial network features. As node motifs are based on topological and spatial properties of brain connectivity networks, these conclusions are also relevant to other studies using connectome analysis. PMID:21811454

  20. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  1. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
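
    The voxel-wise within-network connectivity measure described here reduces to averaging each voxel's Fisher-z-transformed correlations with all other network voxels. A minimal numpy sketch on synthetic time series (the dimensions and data are hypothetical):

    ```python
    import numpy as np

    def within_network_connectivity(ts):
        """Voxel-wise within-network connectivity (WNC).

        ts: (n_timepoints, n_voxels) resting-state time series restricted to
        the network's voxels. The WNC of a voxel is its mean Fisher-z
        transformed correlation with every other voxel in the network.
        """
        z = (ts - ts.mean(0)) / ts.std(0)
        r = (z.T @ z) / len(ts)                 # voxel-by-voxel correlations
        np.clip(r, -0.999999, 0.999999, out=r)  # keep arctanh finite
        fz = np.arctanh(r)                      # Fisher z-transform
        np.fill_diagonal(fz, 0.0)
        return fz.sum(1) / (fz.shape[0] - 1)    # mean over the other voxels

    rng = np.random.default_rng(1)
    ts = rng.standard_normal((200, 50))         # 200 volumes x 50 network voxels
    print(within_network_connectivity(ts)[:5])
    ```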

  2. An objective measure of auditory stream segregation based on molecular psychophysics.

    Science.gov (United States)

    Oberfeld, Daniel

    2014-04-01

    Auditory stream segregation is an important paradigm in the study of auditory scene analysis. Performance-based measures of auditory stream segregation have received increasing use as a complement to subjective reports of streaming. For example, the sensitivity in discriminating a temporal shift imposed on one B tone in an ABA sequence consisting of A and B tones that differ in frequency is often used to infer the perceptual organization (one stream vs. two streams). Limitations of these measures are discussed here, and an alternative measure based on the combination of decision weights and sensitivity is suggested. In the experiment, for ABA and ABB sequences varying in tempo (fast/slow) and duration (long/short), the sensitivity (d') in the temporal shift discrimination task did not differ between fast and slow sequences, despite strong differences in perceptual organization. The decision weights assigned to within-stream and between-stream interonset intervals also deviated from the idealized pattern of near-exclusive reliance on between-stream information in the subjectively integrated case, and on within-stream information in the subjectively segregated case. However, an estimate of internal noise computed using a combination of the estimated decision weights and sensitivity differentiated between sequences that were predominantly perceived as integrated or segregated, with significantly higher internal noise estimates for the segregated case. Therefore, the method of using a combination of decision weights and sensitivity provides a measure of auditory stream segregation that overcomes some of the limitations of purely sensitivity-based measures.
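
    The combination of decision weights and sensitivity can be illustrated with simulated trials: regressing binary responses on trial-by-trial cue perturbations recovers the relative weights, which, together with the observed d', can feed an internal-noise estimate. A simplified sketch (the simulated observer and all numbers are fabricated; the study's actual estimation procedure is more elaborate):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated observer: each trial perturbs two interonset-interval cues
    # (e.g. a within-stream and a between-stream interval, in ms) around their
    # nominal values; the binary response depends on a weighted sum plus
    # internal noise.
    n_trials, sd_ext = 2000, 10.0
    X = rng.normal(0.0, sd_ext, size=(n_trials, 2))   # external cue perturbations
    true_w = np.array([0.8, 0.2])                     # observer's true reliance
    internal = rng.normal(0.0, 8.0, size=n_trials)    # internal noise
    y = (X @ true_w + internal > 0).astype(float)     # "shifted later" responses

    # Decision weights: regress responses on the cue perturbations and
    # normalize, a common convention in molecular-psychophysics analyses.
    w, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
    w /= np.abs(w).sum()
    print("estimated relative weights:", np.round(w, 2))  # close to [0.8, 0.2]

    # Combining these weights with the observed d' then allows an internal
    # noise estimate, by comparing d' with the sensitivity an observer using
    # these weights on the external noise alone would achieve.
    ```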

  3. Spatio-Temporal Data Model for Integrating Evolving Nation-Level Datasets

    Science.gov (United States)

    Sorokine, A.; Stewart, R. N.

    2017-10-01

    Ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging, as datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities of data representations, formats, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time is often complicated by differences in how the boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for an entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of our model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL database extensions.

  4. SPATIO-TEMPORAL DATA MODEL FOR INTEGRATING EVOLVING NATION-LEVEL DATASETS

    Directory of Open Access Journals (Sweden)

    A. Sorokine

    2017-10-01

    Full Text Available Ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging, as datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities of data representations, formats, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time is often complicated by differences in how the boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for an entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of our model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL database extensions.
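
    A minimal sketch of what such an event-based model might look like in code (the entity, event kinds, per-dataset identifiers, and existence test below are illustrative inventions; the actual implementation described above used PostgreSQL):

    ```python
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Event:
        """Lifecycle event affecting the existence of a geographic unit
        (e.g. creation, dissolution, renaming, boundary change)."""
        date: str   # ISO date string, e.g. "1991-12-26"
        kind: str   # "created", "dissolved", "renamed", "boundary_change"

    @dataclass
    class GeoEntity:
        """An evolving geographic unit with per-dataset identifiers."""
        name: str
        events: List[Event] = field(default_factory=list)
        dataset_ids: Dict[str, str] = field(default_factory=dict)

        def existed_on(self, date: str) -> bool:
            created = min((e.date for e in self.events if e.kind == "created"), default=None)
            dissolved = min((e.date for e in self.events if e.kind == "dissolved"), default=None)
            return (created is None or created <= date) and (dissolved is None or date < dissolved)

    ussr = GeoEntity("USSR",
                     [Event("1922-12-30", "created"), Event("1991-12-26", "dissolved")],
                     {"worldbank": "SUN", "faostat": "228"})  # illustrative identifiers
    print(ussr.existed_on("1980-01-01"), ussr.existed_on("2000-01-01"))  # True False
    ```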

  5. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study was to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations. All cases occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  6. Role of DARPP-32 and ARPP-21 in the Emergence of Temporal Constraints on Striatal Calcium and Dopamine Integration

    Science.gov (United States)

    Bhalla, Upinder S.; Hellgren Kotaleski, Jeanette

    2016-01-01

    In reward learning, the integration of NMDA-dependent calcium and dopamine by striatal projection neurons leads to potentiation of corticostriatal synapses through CaMKII/PP1 signaling. In order to elicit the CaMKII/PP1-dependent response, the calcium and dopamine inputs should arrive in temporal proximity and must follow a specific (dopamine after calcium) order. However, little is known about the cellular mechanism which enforces these temporal constraints on the signal integration. In this computational study, we propose that these temporal requirements emerge as a result of the coordinated signaling via two striatal phosphoproteins, DARPP-32 and ARPP-21. Specifically, DARPP-32-mediated signaling could implement an input-interval dependent gating function, via transient PP1 inhibition, thus enforcing the requirement for temporal proximity. Furthermore, ARPP-21 signaling could impose the additional input-order requirement of calcium and dopamine, due to its Ca2+/calmodulin sequestering property when dopamine arrives first. This highlights the possible role of phosphoproteins in the temporal aspects of striatal signal transduction. PMID:27584878

  7. Role of DARPP-32 and ARPP-21 in the Emergence of Temporal Constraints on Striatal Calcium and Dopamine Integration.

    Directory of Open Access Journals (Sweden)

    Anu G Nair

    2016-09-01

    Full Text Available In reward learning, the integration of NMDA-dependent calcium and dopamine by striatal projection neurons leads to potentiation of corticostriatal synapses through CaMKII/PP1 signaling. In order to elicit the CaMKII/PP1-dependent response, the calcium and dopamine inputs should arrive in temporal proximity and must follow a specific (dopamine after calcium) order. However, little is known about the cellular mechanism which enforces these temporal constraints on the signal integration. In this computational study, we propose that these temporal requirements emerge as a result of the coordinated signaling via two striatal phosphoproteins, DARPP-32 and ARPP-21. Specifically, DARPP-32-mediated signaling could implement an input-interval dependent gating function, via transient PP1 inhibition, thus enforcing the requirement for temporal proximity. Furthermore, ARPP-21 signaling could impose the additional input-order requirement of calcium and dopamine, due to its Ca2+/calmodulin sequestering property when dopamine arrives first. This highlights the possible role of phosphoproteins in the temporal aspects of striatal signal transduction.
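
    The two temporal constraints (proximity and order) can be caricatured as a gating function of the calcium-to-dopamine interval. The toy sketch below only mimics the qualitative behavior proposed above; the window shape and time constant are arbitrary:

    ```python
    import numpy as np

    def plasticity_gate(dt_ms, window_ms=500.0):
        """Toy gate for corticostriatal potentiation as a function of input timing.

        dt_ms: dopamine arrival time minus calcium arrival time. The gate
        requires dopamine AFTER calcium (dt > 0, the order constraint) and
        temporal proximity (an exponentially closing window, standing in for
        transient DARPP-32-mediated PP1 inhibition); dopamine-first inputs
        (dt <= 0) are blocked, standing in for ARPP-21 calmodulin sequestration.
        """
        if dt_ms <= 0:
            return 0.0
        return float(np.exp(-dt_ms / window_ms))

    for dt in (-200, 50, 200, 1000, 3000):
        print(dt, round(plasticity_gate(dt), 3))
    ```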

  8. Modulation of effective connectivity during vocalization with perturbed auditory feedback

    Science.gov (United States)

    Parkinson, Amy L.; Korzyukov, Oleg; Larson, Charles R.; Litvak, Vladimir; Robin, Donald A.

    2013-01-01

    The integration of auditory feedback with vocal motor output is important for the control of voice fundamental frequency (F0). We used a pitch-shift paradigm in which subjects respond to an alteration, or shift, of voice pitch auditory feedback with a reflexive change in F0. We presented varying magnitudes of pitch-shifted auditory feedback to subjects during vocalization and passive listening and measured event-related potentials (ERPs) to the feedback shifts. Shifts were delivered at +100 and +400 cents (200 ms duration). The ERP data were modeled with Dynamic Causal Modeling (DCM) techniques in which the effective connectivity between the superior temporal gyrus (STG), inferior frontal gyrus and premotor areas was tested. We compared three main factors: the effect of intrinsic STG connectivity, STG modulation across hemispheres and the specific effect of hemisphere. A Bayesian model selection procedure was used to make inferences about model families. Results suggest that both intrinsic STG and left-to-right STG connections are important in the identification of self-voice error and sensory-motor integration. We identified differences in left-to-right STG connections between the 100 cent and 400 cent shift conditions, suggesting that self and non-self voice errors are processed differently in the left and right hemispheres. These results also highlight the potential of DCM modeling of ERP responses to characterize specific network properties of forward models of voice control. PMID:23665378

  9. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads, and the short list of references for objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, and to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. With complex random-tone maskers or white noise, no fatigue of the sensory receptors was observed, not even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. These auditory fatigue results are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  10. Temporal resolution in children: comparing normal hearing, conductive hearing loss and auditory processing disorder

    Directory of Open Access Journals (Sweden)

    Sheila Andreoli Balen

    2009-02-01

    Full Text Available Temporal resolution is essential to the acoustic perception of speech. It may be altered in subjects with auditory disorders, thus impairing the development of spoken and written language. AIM: To compare the temporal resolution of children with normal hearing, conductive hearing loss and auditory processing disorders. MATERIALS AND METHODS: The sample comprised 31 children, between 7 and 10 years of age, divided into three groups: G1: 12 subjects with normal hearing; G2: 7 with conductive hearing loss; and G3: 12 subjects with auditory processing disorders. Selection procedures were: a questionnaire answered by the parents/guardians, and audiologic and auditory processing evaluations. The research procedure was a gap detection in silence test, carried out at 50 dB SL above the average of 500, 1000 and 2000 Hz, in the binaural condition, at 500, 1000, 2000 and 4000 Hz. The Wilcoxon test, with a significance level of 1%, was used for data analysis. RESULTS: There were differences between G1 and G2 and between G1 and G3 at all frequencies; this difference was not observed between G2 and G3. CONCLUSION: Conductive hearing loss and auditory processing disorder influence the gap detection threshold.

  11. Effect of conductive hearing loss on central auditory function

    Directory of Open Access Journals (Sweden)

    Arash Bayat

    Full Text Available Introduction: It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities, such as speech comprehension or sound localization. Little is known about the consequences of CHL on the temporal aspects of central auditory processing. Objective: This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. Methods: During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. Results: The average GIN threshold was significantly smaller for the control group than for the CHL group in both ears (right: p = 0.004; left: p < 0.05). Conclusion: The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended.
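
    A minimal sketch of how GIN trial data reduce to the two reported measures, the gap perception threshold and the percentage of correct responses. The standard GIN protocol defines the threshold as the shortest gap identified in at least 4 of its 6 presentations; with the smaller hypothetical trial set below, a simple >= 50% criterion stands in for it:

    ```python
    import numpy as np

    def gin_summary(trials):
        """Reduce Gaps-in-Noise trials to (approximate threshold, percent correct).

        trials: (gap_ms, detected) pairs. The threshold is taken as the
        shortest gap detected on at least half of its presentations.
        """
        gaps = sorted({g for g, _ in trials})
        pct_correct = 100.0 * sum(d for _, d in trials) / len(trials)
        threshold = None
        for g in gaps:
            hits = [d for gg, d in trials if gg == g]
            if sum(hits) / len(hits) >= 0.5:
                threshold = g
                break
        return threshold, pct_correct

    trials = [(2, 0), (2, 0), (3, 0), (3, 1), (4, 1), (4, 1), (5, 1), (5, 1)]
    print(gin_summary(trials))  # (3, 62.5)
    ```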

  12. Visual and Auditory Synchronization Deficits Among Dyslexic Readers as Compared to Non-impaired Readers: A Cross-Correlation Algorithm Analysis

    Directory of Open Access Journals (Sweden)

    Itamar eSela

    2014-06-01

    Full Text Available Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed-of-processing gap (asynchrony) between these two modalities, which has been suggested as related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired university adult readers were given stimuli in a reaction time procedure where participants were asked to identify whether the stimulus type was only visual, only auditory or crossmodally integrated. Accuracy, reaction time, and Event Related Potential (ERP) measures were obtained for each of the three conditions. An algorithm to measure the contribution of the temporal speed of processing of each modality to crossmodal integration in each group of participants was developed. Results obtained using this model for the analysis of the current study data indicated that in the crossmodal integration condition the presence of the auditory modality in the pre-response time frame (between 170-240 ms after stimulus presentation) increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal speed of processing of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia.
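
    Although the authors' algorithm operates on accuracy, reaction-time and ERP measures together, the generic operation of estimating the lag between two modality response series is a cross-correlation peak search. A sketch with synthetic series (the 40-sample lag and noise level are fabricated for illustration):

    ```python
    import numpy as np

    def peak_lag(x, y, dt_ms=1.0):
        """Lag (in ms) at which the cross-correlation of two response series
        peaks; positive values mean y lags x."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        c = np.correlate(y, x, mode="full")
        lags = np.arange(-len(x) + 1, len(x))
        return lags[np.argmax(c)] * dt_ms

    rng = np.random.default_rng(3)
    visual = rng.standard_normal(500)
    auditory = np.roll(visual, 40) + 0.3 * rng.standard_normal(500)  # ~40 ms later
    print(peak_lag(visual, auditory))  # approximately 40.0
    ```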

  13. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a stimulus in the absence of that stimulus. Here we discuss four definitions of hallucination: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper: wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no correspondence with a real subject; 4. in a stricter sense, hallucinations are perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  14. Auditory processing: comparison between auditory middle latency response and temporal pattern tests

    Directory of Open Access Journals (Sweden)

    Eliane Schochat

    2009-06-01

    Full Text Available PURPOSE: to check the concordance between Auditory Middle Latency Response results and temporal pattern tests. METHODS: 155 subjects of both genders, aged between 7 and 16 years, with normal peripheral hearing, were submitted to the Frequency (Pitch) and Duration Pattern tests and to the Auditory Middle Latency Response. RESULTS: subjects were classified as normal or altered for auditory processing. The rate of alteration was around 30%, except for the Auditory Middle Latency Response, for which it was somewhat lower (17.4%). Frequency and duration patterns were concordant up to 12 years of age. From 13 years onward, alterations were more frequent in the frequency pattern than in the duration pattern. The frequency and duration patterns (right and left ears) and the Auditory Middle Latency Response were not concordant. At 7 and 8 years, the combination of normal frequency/duration patterns with altered Middle Latency Response occurred more often than the combination of altered frequency/duration patterns with normal Middle Latency Response; at the other ages, the opposite occurred. There was no statistical difference between age groups in the distribution of normal and altered results for the frequency pattern (right and left ears) or for the Auditory Middle Latency Response, with the exception of the duration pattern for the 9- and 10-year-old group. CONCLUSION: there was no concordance between the results of the Auditory Middle Latency Response and the temporal pattern tests applied.

  15. Representing Representation: Integration between the Temporal Lobe and the Posterior Cingulate Influences the Content and Form of Spontaneous Thought.

    Directory of Open Access Journals (Sweden)

    Jonathan Smallwood

    Full Text Available When not engaged in the moment, we often spontaneously represent people, places and events that are not present in the environment. Although this capacity has been linked to the default mode network (DMN), it remains unclear how interactions between the nodes of this network give rise to particular mental experiences during spontaneous thought. One hypothesis is that the core of the DMN integrates information from medial and lateral temporal lobe memory systems, which represent different aspects of knowledge. Individual differences in the connectivity between temporal lobe regions and the default mode network core would then predict differences in the content and form of people's spontaneous thoughts. This study tested this hypothesis by examining the relationship between seed-based functional connectivity and the contents of spontaneous thought recorded in a laboratory study several days later. Variations in connectivity from both medial and lateral temporal lobe regions were associated with different patterns of spontaneous thought, and these effects converged on an overlapping region in the posterior cingulate cortex. We propose that the posterior core of the DMN acts as a representational hub that integrates information represented in the medial and lateral temporal lobe, and that this process is important in determining the content and form of spontaneous thought.

  16. Multisensory speech perception without the left superior temporal sulcus.

    Science.gov (United States)

    Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S

    2012-09-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but in the right STS more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response in the right STS to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli, as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency, as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, was present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  18. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    Science.gov (United States)

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  19. [Verbal auditory agnosia: SPECT study of the brain].

    Science.gov (United States)

    Carmona, C; Casado, I; Fernández-Rojas, J; Garín, J; Rayo, J I

    1995-01-01

    Verbal auditory agnosia is rare in clinical practice. Clinically, it is characterized by impaired comprehension and repetition of speech, while reading, writing, and spontaneous speech are preserved. It is thus distinguished from generalized auditory agnosia by the preserved ability to recognize non-verbal sounds. We present the clinical picture of a forty-year-old, right-handed woman who developed verbal auditory agnosia after bilateral temporal ischemic infarcts due to atrial fibrillation caused by dilated cardiomyopathy. Neurophysiological studies (pure-tone threshold audiometry, brainstem auditory evoked potentials, and cortical auditory evoked potentials) showed sparing of peripheral hearing and an intact auditory pathway in the brainstem, but impaired cortical responses. Cranial CT scan revealed two large hypodense areas involving both cortico-subcortical temporal lobes. Cerebral SPECT using 99mTc-HMPAO as radiotracer showed hypoperfusion posteriorly in both frontal lobes next to Roland's fissure and in both temporal lobes just anterior to Sylvian's fissure.

  20. Attention in Older Adults: A Normative Study of the Integrated Visual and Auditory Continuous Performance Test for Persons Aged 70 Years.

    Science.gov (United States)

    Berginström, Nils; Johansson, Jonas; Nordström, Peter; Nordström, Anna

    2015-01-01

    Our objective was to present normative data from 70-year-olds on the Integrated Visual and Auditory Continuous Performance Test (IVA), a computerized measure of attention and response control. 640 participants (330 men and 310 women), all aged 70 years, completed the IVA, as well as the Mini-Mental State Examination and the Geriatric Depression Scale. Data were stratified by education and gender. Education differences were found in 11 of 22 IVA scales. Minor gender differences were found in six scales for the high-education group, and two scales for the low-education group. Comparisons of healthy participants and participants with stroke, myocardial infarction, or diabetes showed only minor differences. Correlations among IVA scales were strong (all r > .34, p < .001), and those with the widely used Mini-Mental State Examination were weaker (all r < .21, p < .05). Skewed distributions of normative data from primary IVA scales measuring response inhibition (Prudence) and inattention (Vigilance) represent a weakness of this test. This study provides IVA norms for 70-year-olds stratified by education and gender, increasing the usability of this instrument when testing persons near this age. The data presented here show some major differences from original IVA norms, and explanations for these differences are discussed. Explanations include the broad age-range used in the original IVA norms (66-99 years of age) and the passage of 15 years since the original norms were collected.

  1. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time, up to hundreds of milliseconds, and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study investigated the temporal dynamics of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both the eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule, while in later periods, the iFC of the AC was positively correlated with the eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integration between sensory, perceptual, attentional, motor, emotional, and executive processes.
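
    The Granger-causality step described above can be illustrated with standard time-series tooling. A minimal sketch using statsmodels, with simulated channels standing in for the AC and a downstream region; the lag, channel names, and data are assumptions, not the study's recordings:

    ```python
    # Test whether one channel Granger-causes another: statsmodels expects a
    # 2-column array and tests whether column 2 Granger-causes column 1.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 2000
    ac = rng.standard_normal(n)                  # stand-in for an AC channel
    # Second channel lags the first by 5 samples, so AC should "cause" it.
    ipl = 0.6 * np.roll(ac, 5) + 0.4 * rng.standard_normal(n)

    data = np.column_stack([ipl, ac])            # [effect, putative cause]
    results = grangercausalitytests(data, maxlag=10, verbose=False)
    p_value = results[5][0]["ssr_ftest"][1]      # F-test p-value at lag 5
    print(f"AC -> IPL Granger causality, lag 5: p = {p_value:.3g}")
    ```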

  2. Articulatory movements modulate auditory responses to speech.

    Science.gov (United States)

    Agnew, Z K; McGettigan, C; Banks, B; Scott, S K

    2013-06-01

    Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for reading aloud compared with mouthing while listening. These data demonstrate an anterior-posterior division of superior temporal regions, where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha

    2010-02-01

    The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) for subjects with left mesial temporal sclerosis (LMTS) in relation to a behavioral test, the Dichotic Digits Test (DDT), and an event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and P300 amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls; this difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. Regarding P300 latency and amplitude, as a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions in relation to controls, the difference being statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of the P300 between affected and unaffected hemispheres.

  4. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte

    2013-01-01

    …they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent, but not incongruent, object stimuli. In the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent…

  5. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany Plakke

    2014-07-01

    The functional auditory system extends from the ears to the frontal lobes, with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections, as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information, with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli, or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  6. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  7. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
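
    The critical manipulation above is the shape of the temporal modulation. A sketch of how the smooth (sinusoidal) versus transient (square-wave) envelopes might be constructed; the sample rate, modulation frequency, and carrier below are assumed values, not the study's parameters:

    ```python
    # Build a gradual sinusoidal envelope and an abrupt square-wave envelope.
    # Per the findings above, only the transient (square) changes support
    # efficient synchrony-driven audiovisual binding.
    import numpy as np
    from scipy.signal import square

    fs = 1000               # sample rate (Hz), assumed
    f_mod = 1.3             # modulation frequency (Hz), assumed
    t = np.arange(0, 2.0, 1 / fs)

    sine_env = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))    # smooth modulation
    square_env = 0.5 * (1 + square(2 * np.pi * f_mod * t))  # abrupt on/off steps

    carrier = np.sin(2 * np.pi * 500 * t)                   # 500 Hz tone, assumed
    transient_stimulus = square_env * carrier
    smooth_stimulus = sine_env * carrier
    ```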

  8. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems for motor learning of postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and the target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory, but not the visual, BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
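
    Equalizing perceived magnitudes across modalities via Stevens' power law, as described above, means inverting psi = k * phi^a per modality. A sketch under assumed textbook-style exponents; the study's actual constants are not given in the abstract:

    ```python
    # Map a COP-to-target distance onto visual and auditory stimulus
    # intensities so the *perceived* magnitudes match under Stevens' law.
    import numpy as np

    A_BRIGHTNESS = 0.33   # assumed exponent for brightness
    A_LOUDNESS = 0.67     # assumed exponent for loudness (sound pressure)

    def physical_intensity(perceived, exponent, k=1.0):
        """Invert psi = k * phi**a to find the physical intensity phi
        producing a target perceived magnitude psi."""
        return (perceived / k) ** (1.0 / exponent)

    cop_error = np.linspace(0.1, 5.0, 5)     # COP-target distance (cm), assumed
    visual_drive = physical_intensity(cop_error, A_BRIGHTNESS)
    auditory_drive = physical_intensity(cop_error, A_LOUDNESS)
    print(np.round(visual_drive, 2), np.round(auditory_drive, 2))
    ```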

  9. Auditory steady state cortical responses indicate deviant phonemic-rate processing in adults with dyslexia.

    Science.gov (United States)

    Poelmans, Hanne; Luts, Heleen; Vandermosten, Maaike; Boets, Bart; Ghesquière, Pol; Wouters, Jan

    2012-01-01

    Speech intelligibility is strongly influenced by the ability to process temporal modulations. It is hypothesized that in dyslexia, deficient processing of rapidly changing auditory information underlies a deficient development of phonological representations, causing reading and spelling problems. Low-frequency modulations between 4 and 20 Hz correspond to the processing rate of important phonological segments (syllables and phonemes, respectively) in speech and therefore provide a bridge between low-level auditory and phonological processing. In the present study, temporal modulation processing was investigated by auditory steady state responses (ASSRs) in normal-reading and dyslexic adults. Multichannel ASSRs were recorded in normal-reading and dyslexic adults in response to speech-weighted noise stimuli amplitude modulated at 80, 20, and 4 Hz. The 80 Hz modulation is known to be primarily generated by the brainstem, whereas the 20 and 4 Hz modulations are mainly generated in the cortex. Furthermore, the 20 and 4 Hz modulations provide an objective auditory performance measure related to phonemic- and syllabic-rate processing. In addition to neurophysiological measures, psychophysical tests of speech-in-noise perception and phonological awareness were assessed. On the basis of response strength and phase coherence measures, normal-reading and dyslexic participants showed similar processing at the brainstem level. At the cortical level of the auditory system, dyslexic subjects demonstrated deviant phonemic-rate responses compared with normal readers, whereas no group differences were found for the syllabic rate. Furthermore, a relationship between phonemic-rate ASSRs and psychophysical tests of speech-in-noise perception and phonological awareness was obtained. The results suggest reduced cortical processing for phonemic-rate modulations in dyslexic adults, presumably resulting in limited integration of temporal information in the dorsal phonological pathway.
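
    The two ASSR measures named above, response strength and phase coherence, are commonly computed from the FFT bin at the modulation frequency across epochs. A hedged sketch on simulated epochs; all parameters are assumed, not the study's recording settings:

    ```python
    # Simulate epochs containing a weak phase-locked 20 Hz response in noise,
    # then compute amplitude of the mean vector (strength) and the phase
    # locking value (coherence) at the modulation frequency.
    import numpy as np

    fs = 1000                      # sampling rate (Hz), assumed
    f_mod = 20                     # modulation rate of interest (Hz)
    n_epochs, n_samples = 100, fs  # one hundred 1-s epochs

    rng = np.random.default_rng(1)
    t = np.arange(n_samples) / fs
    epochs = (0.3 * np.sin(2 * np.pi * f_mod * t)
              + rng.standard_normal((n_epochs, n_samples)))

    spectra = np.fft.rfft(epochs, axis=1)
    freqs = np.fft.rfftfreq(n_samples, 1 / fs)
    bin_idx = np.argmin(np.abs(freqs - f_mod))

    component = spectra[:, bin_idx]
    response_strength = np.abs(component.mean())
    phase_coherence = np.abs(np.mean(component / np.abs(component)))  # in [0, 1]
    print(f"strength={response_strength:.2f}, coherence={phase_coherence:.2f}")
    ```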

  10. Geo-Parcel Based Crop Identification by Integrating High Spatial-Temporal Resolution Imagery from Multi-Source Satellite Data

    Directory of Open Access Journals (Sweden)

    Yingpin Yang

    2017-12-01

    Geo-parcel based crop identification plays an important role in precision agriculture, meeting the needs of refined farmland management. This study presents an improved procedure for geo-parcel based crop identification that combines fine-resolution images and multi-source medium-resolution images. GF-2 images with a fine spatial resolution of 0.8 m provided agricultural farming plot boundaries, and GF-1 (16 m) and Landsat 8 OLI data were used to derive the geo-parcel based enhanced vegetation index (EVI) time-series. In this study, we propose a piecewise EVI time-series smoothing method to fit irregular time profiles, especially for crop rotation situations. Global EVI time-series were divided into several temporal segments, from which phenological metrics could be derived. This method was applied to Lixian, where crop rotation was the common practice of growing different types of crops in the same plot in sequenced seasons. After collection of phenological features and multi-temporal spectral information, Random Forest (RF) classification was performed to classify crop types, and the overall accuracy was 93.27%. Moreover, an analysis of feature significance showed that phenological features were of greater importance for distinguishing agricultural land cover than temporal spectral information. The identification results indicated that the integration of high spatial-temporal resolution imagery is promising for geo-parcel based crop identification and that the newly proposed smoothing method is effective.
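
    A hedged sketch of the two-step procedure described above: piecewise smoothing of an EVI series split at an assumed rotation break, then Random Forest classification on simple per-segment phenological features. The breakpoint, feature choices, and data are illustrative, not the paper's exact pipeline:

    ```python
    # Smooth each temporal segment separately so one crop's curve does not
    # bleed into the next rotation's segment, then classify on per-segment
    # phenological features (peak EVI and time of peak).
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.ensemble import RandomForestClassifier

    def piecewise_smooth(evi, breakpoints, window=7, poly=2):
        # Segments are assumed longer than the smoothing window.
        out = np.empty_like(evi)
        edges = [0, *breakpoints, len(evi)]
        for a, b in zip(edges[:-1], edges[1:]):
            out[a:b] = savgol_filter(evi[a:b], window, poly)
        return out

    rng = np.random.default_rng(2)
    n_parcels, n_steps = 60, 46               # ~8-day composites over a year
    evi = rng.random((n_parcels, n_steps)) * 0.2 + 0.3
    labels = rng.integers(0, 3, n_parcels)    # 3 hypothetical crop classes

    feats = []
    for series in evi:
        smooth = piecewise_smooth(series, breakpoints=[23])  # assumed break
        segments = np.split(smooth, [23])
        feats.append([s.max() for s in segments] + [s.argmax() for s in segments])

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats, labels)
    ```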

  11. Spectral and temporal properties of long GRBs detected by INTEGRAL from 3 keV to 8 MeV

    DEFF Research Database (Denmark)

    Martin-Carrillo, A.; Topinka, M.; Hanlon, L.

    2010-01-01

    A study of the spectral and temporal evolution of a subset of 7 INTEGRAL γ-ray bursts across a wide energy range from 3 keV to 8 MeV has been carried out. This GRB sample is characterised by long multi-peaked bursts that are bright in the JEM-X energy range and encompass X-ray rich bursts, X-ray flashes, and classical GRBs. We report the detection of X-ray prompt and afterglow emission from GRB 041219A and GRB 081003A with JEM-X for the first time. At least two temporal breaks have been identified in the X-ray afterglow light curve of GRB 081003A. These results demonstrate INTEGRAL's broadband capabilities…

  12. When Spatial and Temporal Contiguities Help the Integration in Working Memory: "A Multimedia Learning" Approach

    Science.gov (United States)

    Mammarella, Nicola; Fairfield, Beth; Di Domenico, Alberto

    2013-01-01

    Two experiments examined the effects of spatial and temporal contiguities in a working memory binding task that required participants to remember coloured objects. In Experiment 1, a black and white drawing and a corresponding phrase that indicated its colour perceptually were either near or far (spatial study condition), while in Experiment 2,…

  13. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to…

  14. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom…
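
    Measuring connectivity "between all channel-pairs" before and after stimulation, as described above, reduces to correlation matrices over baseline windows. A toy sketch with placeholder channel counts and simulated data, not the study's montage:

    ```python
    # Pearson correlation between all channel pairs during pre- and
    # post-stimulation baselines, then the per-pair change in coupling.
    import numpy as np

    rng = np.random.default_rng(3)
    n_channels, n_samples = 16, 600          # e.g., 60 s at 10 Hz, assumed

    pre = rng.standard_normal((n_channels, n_samples))
    post = rng.standard_normal((n_channels, n_samples))

    conn_pre = np.corrcoef(pre)              # (16, 16) channel-pair matrix
    conn_post = np.corrcoef(post)

    delta = conn_post - conn_pre
    iu = np.triu_indices(n_channels, k=1)    # unique pairs only
    print(f"mean connectivity change across pairs: {delta[iu].mean():+.3f}")
    ```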

  15. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Background: Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
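
    A "repeating frozen noise" stimulus of the kind used above can be built by generating one noise segment and tiling it at the desired repetition rate, so the fine structure repeats exactly. A sketch with assumed sample rate and durations:

    ```python
    # Generate frozen-noise stimuli at the repetition rates listed above.
    import numpy as np

    def frozen_noise(rep_rate_hz, duration_s, fs=44100, seed=0):
        rng = np.random.default_rng(seed)
        seg_len = int(round(fs / rep_rate_hz))      # samples per repetition
        segment = rng.standard_normal(seg_len)      # the "frozen" segment
        n_reps = int(np.ceil(duration_s * fs / seg_len))
        return np.tile(segment, n_reps)[: int(duration_s * fs)]

    stimuli = {rate: frozen_noise(rate, duration_s=2.0)
               for rate in (5, 10, 50, 200, 500)}
    ```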

  16. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including the Broca's language region. Furthermore, how real the hallucination that subjects experienced was depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  17. Single-molecule diffusion and conformational dynamics by spatial integration of temporal fluctuations

    KAUST Repository

    Serag, Maged F.

    2014-10-06

    Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules to map their diffusion behaviours. However, accurate analysis of diffusion behaviours, and the inclusion of other parameters such as the conformation and size of molecules, remain limitations of the method. Here, we report a method that addresses these limitations of existing single-molecule localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.
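
    A toy rendering of the cumulative-area idea described above: count the distinct pixels a tracked molecule has occupied as frames accumulate; faster diffusion grows the area faster. Pixel size and step statistics are illustrative assumptions, not the paper's analysis parameters:

    ```python
    # Track the number of distinct pixels visited after each frame for a slow
    # and a fast random-walk trajectory.
    import numpy as np

    def cumulative_area(track_xy, pixel_size=1.0):
        """track_xy: (n_frames, 2) localizations; returns the occupied-pixel
        count after each frame."""
        pixels = np.floor(track_xy / pixel_size).astype(int)
        seen, counts = set(), []
        for px in map(tuple, pixels):
            seen.add(px)
            counts.append(len(seen))
        return np.array(counts)

    rng = np.random.default_rng(4)
    slow = np.cumsum(rng.normal(0, 0.5, (500, 2)), axis=0)   # small steps
    fast = np.cumsum(rng.normal(0, 2.0, (500, 2)), axis=0)   # large steps
    print(cumulative_area(slow)[-1], cumulative_area(fast)[-1])
    ```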

  18. The effects of rhythm and melody on auditory stream segregation.

    Science.gov (United States)

    Szalárdy, Orsolya; Bendixen, Alexandra; Böhm, Tamás M; Davies, Lucy A; Denham, Susan L; Winkler, István

    2014-03-01

    While many studies have assessed the efficacy of similarity-based cues for auditory stream segregation, much less is known about whether and how the larger-scale structure of sound sequences supports stream formation and the choice of sound organization. Two experiments investigated the effects of musical melody and rhythm on the segregation of two interleaved tone sequences. The two sets of tones fully overlapped in pitch range but differed from each other in interaural time and intensity. Unbeknownst to the listener, separately, each of the interleaved sequences was created from the notes of a different song. In different experimental conditions, the notes and/or their timing could either follow those of the songs or they could be scrambled or, in the case of timing, set to be isochronous. Listeners were asked to continuously report whether they heard a single coherent sequence (integrated) or two concurrent streams (segregated). Although temporal overlap between tones from the two streams proved to be the strongest cue for stream segregation, significant effects of tonality and familiarity with the songs were also observed. These results suggest that regular temporal patterns are utilized as cues in auditory stream segregation and that long-term memory is involved in this process.

  19. Integration of spatio-temporal contrast sensitivity with a multi-slice channelized Hotelling observer

    Science.gov (United States)

    Avanaki, Ali N.; Espig, Kathryn S.; Marchessoux, Cedric; Krupinski, Elizabeth A.; Bakic, Predrag R.; Kimpe, Tom R. L.; Maidment, Andrew D. A.

    2013-03-01

    Barten's model of the spatio-temporal contrast sensitivity function of the human visual system is embedded in a multi-slice channelized Hotelling observer. This is done by 3D filtering of the stack of images with the spatio-temporal contrast sensitivity function and feeding the result (i.e., the perceived image stack) to the multi-slice channelized Hotelling observer. The proposed procedure of incorporating the spatio-temporal contrast sensitivity function is generic in the sense that it can be used with observers other than the multi-slice channelized Hotelling observer. Detection performance of the new observer in digital breast tomosynthesis is measured at a variety of browsing speeds, at two spatial sampling rates, using computer simulations. Our results show a peak in detection performance at mid browsing speeds. We compare our results to those of a human observer study reported earlier (I. Diaz et al., SPIE MI 2011). The effects of display luminance, contrast and spatial sampling rate, with and without considering foveal vision, are also studied. Reported simulations are conducted with real digital breast tomosynthesis image stacks, as well as stacks from an anthropomorphic software breast phantom (P. Bakic et al., Med. Phys. 2011). Lesion cases are simulated by inserting single micro-calcifications or masses. Limitations of our methods and ways to improve them are discussed.
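
    The embedding step described above is a 3D filtering of the image stack by a spatio-temporal contrast sensitivity function before the model observer sees it. A rough sketch that filters only the temporal axis, with a Gaussian band-pass gain as a crude stand-in for Barten's model; the actual CSF form and parameters differ:

    ```python
    # Filter an image stack along the time axis with a CSF-like band-pass
    # gain, producing a "perceived" stack for a downstream model observer.
    import numpy as np

    def temporal_csf_filter(stack, fps, peak_hz=8.0, bandwidth=6.0):
        """stack: (n_frames, h, w). Apply a Gaussian band-pass gain centred
        on peak_hz along the temporal axis via the FFT."""
        n = stack.shape[0]
        freqs = np.fft.rfftfreq(n, 1.0 / fps)
        gain = np.exp(-((freqs - peak_hz) ** 2) / (2 * bandwidth**2))
        spectrum = np.fft.rfft(stack, axis=0)
        return np.fft.irfft(spectrum * gain[:, None, None], n=n, axis=0)

    stack = np.random.default_rng(5).random((64, 128, 128))
    perceived = temporal_csf_filter(stack, fps=30)   # feed this to the observer
    ```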

  20. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal Amitay

    2012-11-01

    Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase-locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioural FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.
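
    The signal-detection account above treats frequency-discrimination sensitivity as d' = Δf / σ_internal, so reducing internal (phase-locking) noise lowers the threshold proportionally. A worked toy example with assumed noise values, not the study's model outputs:

    ```python
    # Threshold at a fixed criterion d' scales with the internal noise:
    # halving sigma_internal halves the smallest discriminable Δf.
    def fd_threshold(sigma_internal_hz, criterion_dprime=1.0):
        """Smallest frequency difference (Hz) reaching the criterion d'."""
        return criterion_dprime * sigma_internal_hz

    sigma_pre, sigma_post = 8.0, 4.0   # assumed internal noise before/after training
    print(fd_threshold(sigma_pre), "->", fd_threshold(sigma_post))  # 8.0 -> 4.0 Hz
    ```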

  1. Integrated Use of Multi-temporal SAR and Optical Satellite Imagery for Crop Mapping in Ukraine

    Science.gov (United States)

    Lavreniuk, M. S.; Kussul, N.; Skakun, S.

    2014-12-01

    Information on the location and spatial distribution of crops is extremely important within many applications, such as crop area estimation, crop yield forecasting and environmental impact analysis [1-2]. Synthetic-aperture radar (SAR) instruments on board remote sensing satellites offer unique features for imaging crops due to their all-weather capabilities and ability to capture crop characteristics not available to optical instruments. This abstract aims to explore the feasibility and use of multi-temporal multi-polarization SAR images, along with multi-temporal optical images, for crop classification in Ukraine using a neural network ensemble. The study area included a JECAM test site in Ukraine which is a part of the Global Agriculture Monitoring (GEOGLAM) initiative. Six optical images were acquired by Landsat-8, and twelve SAR images were acquired by Radarsat-2 (six in FQ8W mode at a 28° incidence angle, and six in FQ20W mode at 40°) over the study region. Optical images were atmospherically corrected. SAR images were filtered for speckle and converted to backscatter coefficients. Ground truth data on crop type (274 polygons) were collected during the summer of 2013. In order to perform supervised classification of the multi-temporal satellite imagery, an ensemble of neural networks, in particular multi-layer perceptrons (MLPs), was used. The use of the ensemble improved overall classification accuracy (OA) by +0.1% to +2% compared to an individual network. Adding multi-temporal SAR images to multi-temporal optical images improved both OA and individual class accuracies, in particular for sunflower (gains up to +25.9%), soybeans (+16.2%), and maize (+6.2%). It was also found that better OA can be obtained using the shallower angle (FQ20W, 40°; OA = 77%) than the steeper angle (FQ8W, 28°; OA = 71.78%). 1. F. Kogan et al., "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," Int. J. Appl. Earth Observ. Geoinform
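
    A minimal sketch of an MLP ensemble with majority voting of the kind described above, trained on stacked multi-temporal optical + SAR features; the feature layout, sizes, and class count are assumptions, not the study's configuration:

    ```python
    # Train several MLPs with different seeds and combine them by majority vote.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(6)
    # e.g., 6 optical scenes x 4 bands + 12 SAR scenes x 4 polarizations = 72
    X = rng.random((1000, 72))
    y = rng.integers(0, 5, 1000)        # 5 hypothetical crop classes

    ensemble = [
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=seed).fit(X, y)
        for seed in range(5)
    ]

    votes = np.stack([m.predict(X) for m in ensemble])          # (5, 1000)
    pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    ```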

  2. An interactive model of auditory-motor speech perception.

    Science.gov (United States)

    Liebenthal, Einat; Möttönen, Riikka

    2017-12-18

    Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting between temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast between three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Temporal dynamics of L5 dendrites in medial prefrontal cortex regulate integration versus coincidence detection of afferent inputs.

    Science.gov (United States)

    Dembrow, Nikolai C; Zemelman, Boris V; Johnston, Daniel

    2015-03-18

    Distinct brain regions are highly interconnected via long-range projections. How this inter-regional communication occurs depends not only upon which subsets of postsynaptic neurons receive input, but also, and equally importantly, upon what cellular subcompartments the projections target. Neocortical pyramidal neurons receive input onto their apical dendrites. However, physiological characterization of these inputs thus far has been exclusively somatocentric, leaving how the dendrites respond to spatial and temporal patterns of input unexplored. Here we used a combination of optogenetics with multisite electrode recordings to simultaneously measure dendritic and somatic responses to afferent fiber activation in two different populations of layer 5 (L5) pyramidal neurons in the rat medial prefrontal cortex (mPFC). We found that commissural inputs evoked monosynaptic responses in both intratelencephalic (IT) and pyramidal tract (PT) dendrites, whereas monosynaptic hippocampal input primarily targeted IT, but not PT, dendrites. To understand the role of dendritic integration in the processing of long-range inputs, we used dynamic clamp to simulate synaptic currents in the dendrites. IT dendrites functioned as temporal integrators that were particularly responsive to dendritic inputs within the gamma frequency range (40-140 Hz). In contrast, PT dendrites acted as coincidence detectors by responding to spatially distributed signals within a narrow time window. Thus, the PFC extracts information from different brain regions through the combination of selective dendritic targeting and the distinct dendritic physiological properties of L5 pyramidal dendrites. Copyright © 2015 the authors 0270-6474/15/354501-14$15.00/0.
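
    The integrator-versus-coincidence-detector contrast above can be caricatured with a leaky integrate-and-fire toy model: a long membrane time constant sums temporally dispersed EPSPs, while a short one responds strongly only when inputs coincide. All parameters are illustrative, not the measured dendritic values:

    ```python
    # Compare peak depolarization for dispersed vs. coincident inputs under a
    # long (integrator-like) and short (coincidence-like) time constant.
    import numpy as np

    def lif_peak(input_times_ms, tau_ms, epsp_amp=1.0, dt=0.1, t_max=50.0):
        """Peak depolarization produced by instantaneous EPSP 'kicks'."""
        t = np.arange(0.0, t_max, dt)
        v = np.zeros_like(t)
        for i in range(1, len(t)):
            v[i] = v[i - 1] * (1 - dt / tau_ms)               # passive leak
            hits = np.isclose(t[i], input_times_ms, atol=dt / 2).sum()
            v[i] += epsp_amp * hits                           # synaptic input
        return v.max()

    spread = [5, 15, 25, 35]             # temporally dispersed inputs (ms)
    coincident = [20.0, 20.1, 20.2, 20.3]

    for tau, label in [(30.0, "integrator-like"), (2.0, "coincidence-like")]:
        print(label, round(lif_peak(spread, tau), 2),
              round(lif_peak(coincident, tau), 2))
    ```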

  4. Integration of Organic Electrochemical and Field-Effect Transistors for Ultraflexible, High Temporal Resolution Electrophysiology Arrays.

    Science.gov (United States)

    Lee, Wonryung; Kim, Dongmin; Rivnay, Jonathan; Matsuhisa, Naoji; Lonjaret, Thomas; Yokota, Tomoyuki; Yawo, Hiromu; Sekino, Masaki; Malliaras, George G; Someya, Takao

    2016-11-01

    Integration of organic electrochemical transistors and organic field-effect transistors is successfully realized on a 600 nm thick parylene film toward an electrophysiology array. A single cell of an integrated device and a 2 × 2 electrophysiology array succeed in detecting electromyogram with local stimulation of the motor nerve bundle of a transgenic rat by a laser pulse. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Characterizing functional integrity: intraindividual brain signal variability predicts memory performance in patients with medial temporal lobe epilepsy.

    Science.gov (United States)

    Protzner, Andrea B; Kovacevic, Natasa; Cohn, Melanie; McAndrews, Mary Pat

    2013-06-05

    Computational modeling suggests that variability in brain signals provides important information regarding the system's capacity to adopt different network configurations that may promote optimal responding to stimuli. Although there is limited empirical work on this construct, a recent study indicates that age-related decreases in variability across the adult lifespan correlate with less efficient and less accurate performance. Here, we extend this construct to the assessment of cerebral integrity by comparing fMRI BOLD variability and fMRI BOLD amplitude in their ability to account for differences in functional capacity in patients with focal unilateral medial temporal dysfunction. We were specifically interested in whether either of these BOLD measures could identify a link between the affected medial temporal region and memory performance (as measured by a clinical test of verbal memory retention). Using partial least-squares analyses, we found that variability in a set of regions including the left hippocampus predicted verbal retention and, furthermore, this relationship was similar across a range of cognitive tasks measured during scanning (i.e., the same pattern was seen in fixation, autobiographical recall, and word generation). In contrast, signal amplitude in the hippocampus did not predict memory performance, even for a task that reliably activates the medial temporal lobes (i.e., autobiographical recall). These findings provide a powerful validation of the concept that variability in brain signals reflects functional integrity. Furthermore, this measure can be characterized as a robust biomarker in this clinical setting because it reveals the same pattern regardless of cognitive challenge or task engagement during scanning.

  6. Reduced auditory segmentation potentials in first-episode schizophrenia.

    Science.gov (United States)

    Coffman, Brian A; Haigh, Sarah M; Murphy, Timothy K; Leiter-Mcbeth, Justin; Salisbury, Dean F

    2017-10-22

    Auditory scene analysis (ASA) dysfunction is likely an important component of the symptomatology of schizophrenia. Auditory object segmentation, the grouping of sequential acoustic elements into temporally-distinct auditory objects, can be assessed with electroencephalography through measurement of the auditory segmentation potential (ASP). Further, N2 responses to the initial and final elements of auditory objects are enhanced relative to medial elements, which may indicate auditory object edge detection (initiation and termination). Both ASP and N2 modulation are impaired in long-term schizophrenia. To determine whether these deficits are present early in disease course, we compared ASP and N2 modulation between individuals at their first episode of psychosis within the schizophrenia spectrum (FE, N=20) and matched healthy controls (N=24). The ASP was reduced by >40% in FE; however, N2 modulation was not statistically different from HC. This suggests that auditory segmentation (ASP) deficits exist at this early stage of schizophrenia, but auditory edge detection (N2 modulation) is relatively intact. In a subset of subjects for whom structural MRIs were available (N=14 per group), ASP sources were localized to midcingulate cortex (MCC) and temporal auditory cortex. Neurophysiological activity in FE was reduced in MCC, an area linked to aberrant perceptual organization, negative symptoms, and cognitive dysfunction in schizophrenia, but not temporal auditory cortex. This study supports the validity of the ASP for measurement of auditory object segmentation and suggests that the ASP may be useful as an early index of schizophrenia-related MCC dysfunction. Further, ASP deficits may serve as a viable biomarker of disease presence. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

    …no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution of hearing, cognitive skills, and auditory lifestyle in 25 new hearing-aid users. The purpose was to assess the predictive power of the nonauditory measures while looking at the relationships between measures from various auditory-performance domains. The results showed that only moderate correlation exists between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle…

  8. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    Directory of Open Access Journals (Sweden)

    Vibhakar C Kotak

    2015-08-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation, and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory-driven inhibition dominates the postsynaptic responses in this non-sensory cortical region downstream from the auditory cortex.

  9. Contributions of cerebellar event-based temporal processing and preparatory function to speech perception.

    Science.gov (United States)

    Schwartze, Michael; Kotz, Sonja A

    2016-10-01

    The role of the cerebellum in the anatomical and functional architecture of the brain is a matter of ongoing debate. We propose that cerebellar temporal processing contributes to speech perception on a number of accounts: temporally precise cerebellar encoding and rapid transmission of an event-based representation of the temporal structure of the speech signal serves to prepare areas in the cerebral cortex for the subsequent perceptual integration of sensory information. As speech dynamically evolves in time this fundamental preparatory function may extend its scope to the predictive allocation of attention in time and supports the fine-tuning of temporally specific models of the environment. In this framework, an oscillatory account considering a range of frequencies may best serve the linking of the temporal and speech processing systems. Lastly, the concerted action of these processes may not only advance predictive adaptation to basic auditory dynamics but optimize the perceptual integration of speech. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Bilateral Capacity for Speech Sound Processing in Auditory Comprehension: Evidence from Wada Procedures

    Science.gov (United States)

    Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.

    2008-01-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…

  11. Auditory Risk of Air Rifles

    Science.gov (United States)

    Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.

    2016-01-01

    Objective: To characterize the impulse noise exposure and auditory risk for air rifle users, for both youth and adults. Design: Acoustic characteristics were examined and auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample: Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results: None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, and 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors, and the pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion: To minimize auditory risk, youth should use air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model appropriate hearing health behaviors necessary for recreational firearm use. PMID:26840923
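
    The exposure arithmetic behind shot-count limits like the one above: N identical shots raise the equivalent level by 10·log10(N) over a single shot. A back-of-envelope sketch; the per-shot level below is a hypothetical placeholder, not a measured value from the study:

    ```python
    # Accumulate per-shot acoustic energy and check it against an LAeq limit.
    import math

    def laeq_for_shots(laeq_one_shot_db, n_shots):
        return laeq_one_shot_db + 10 * math.log10(n_shots)

    def max_unprotected_shots(laeq_one_shot_db, limit_db=75.0):
        """Largest N with LAeq(N) <= limit (e.g., the WHO LAeq75 guideline)."""
        return int(10 ** ((limit_db - laeq_one_shot_db) / 10))

    per_shot = 56.0                          # assumed single-shot contribution (dB)
    print(laeq_for_shots(per_shot, 100))     # 76.0 dB for 100 shots
    print(max_unprotected_shots(per_shot))   # 79 shots under a 75 dB limit
    ```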

  12. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency.

  13. Spatio-Temporal LAI Modelling by Integrating Climate and MODIS LAI Data in a Mesoscale Catchment

    Directory of Open Access Journals (Sweden)

    Liya Sun

    2017-02-01

    Vegetation is often represented by the leaf area index (LAI) in many ecological, hydrological and meteorological land surface models, so the spatio-temporal dynamics of vegetation are important to represent in these models. While widely applied methods such as the Canopy Structure Dynamic Model (CSDM) and the Double Logistic Model (DLM) are based solely on cumulative daily mean temperature data as input, a new spatio-temporal LAI prediction model referred to as the Temperature Precipitation Vegetation Model (TPVM) is developed that also considers cumulative precipitation data as input to the modelling process. The TPVM, CSDM, and DLM model performances are compared and evaluated against filtered LAI data from the Moderate Resolution Imaging Spectroradiometer (MODIS). The calibration/validation results of a cross-validation performed in the meso-scale Attert catchment in Luxembourg indicated that the DLM and TPVM generally provided more realistic and accurate LAI data. The TPVM performed better for the agricultural land cover types than the other two models, which used only the temperature data. The Pearson's correlation coefficient (CC) between the TPVM and the field measurements is 0.78, compared to 0.73 and 0.69 for the DLM and CSDM, respectively. Phenological metrics were derived from the TPVM model to investigate the interaction between the climate variables and the LAI variations. These interactions illustrated the dominant control of temperature on LAI dynamics for deciduous forest cover, and a combined influence of temperature and precipitation for the agricultural land use areas.
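
    A schematic of the TPVM idea described above: drive a logistic LAI growth curve with a weighted sum of cumulative temperature and cumulative precipitation and fit it to observations. The functional form and coefficients are assumptions for illustration, not the paper's calibrated model:

    ```python
    # Fit a logistic LAI curve driven by cumulative climate forcing.
    import numpy as np
    from scipy.optimize import curve_fit

    def tpvm(X, lai_max, k, w_t, w_p, x0):
        cum_temp, cum_precip = X
        driver = w_t * cum_temp + w_p * cum_precip   # combined climate forcing
        return lai_max / (1 + np.exp(-k * (driver - x0)))

    days = np.arange(120)
    cum_temp = np.cumsum(np.clip(np.random.default_rng(7).normal(12, 5, 120), 0, None))
    cum_precip = np.cumsum(np.random.default_rng(8).exponential(2.0, 120))
    lai_obs = (tpvm((cum_temp, cum_precip), 5.0, 0.01, 1.0, 0.5, 800)
               + np.random.default_rng(9).normal(0, 0.1, 120))

    popt, _ = curve_fit(tpvm, (cum_temp, cum_precip), lai_obs,
                        p0=[5, 0.01, 1, 0.5, 800], maxfev=10000)
    ```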

  14. Asymmetric excitatory synaptic dynamics underlie interaural time difference processing in the auditory system.

    Directory of Open Access Journals (Sweden)

    Pablo E Jercog

    2010-06-01

    Full Text Available Low-frequency sound localization depends on the neural computation of interaural time differences (ITD) and relies on neurons in the auditory brain stem that integrate synaptic inputs delivered by the ipsi- and contralateral auditory pathways that start at the two ears. The first auditory neurons that respond selectively to ITD are found in the medial superior olivary nucleus (MSO). We identified a new mechanism for ITD coding using a brain slice preparation that preserves the binaural inputs to the MSO. There was an internal latency difference for the two excitatory pathways that would, if left uncompensated, position the ITD response function too far outside the physiological range to be useful for estimating ITD. We demonstrate, and support using a biophysically based computational model, that a bilateral asymmetry in excitatory post-synaptic potential (EPSP) slopes provides a robust compensatory delay mechanism, due to differential activation of a low-threshold potassium conductance on these inputs, and permits MSO neurons to encode physiological ITDs. We suggest, more generally, that the dependence of spike probability on the rate of depolarization, as in these auditory neurons, provides a mechanism for temporal order discrimination between EPSPs.

  15. Auditory based neuropsychology in neurosurgery.

    Science.gov (United States)

    Wester, Knut

    2008-04-01

    In this article, an account is given of the author's experience with auditory based neuropsychology in a clinical, neurosurgical setting. The patients included in the studies are patients with traumatic or vascular brain lesions, patients undergoing brain surgery to alleviate symptoms of Parkinson's disease, or patients harbouring an intracranial arachnoid cyst affecting the temporal or the frontal lobe. The aims of these investigations were to collect information about the location of cognitive processes in the human brain, or to disclose dyscognition in patients with an arachnoid cyst. All the patients were tested with the dichotic listening (DL) technique. In addition, the cyst patients were subjected to a number of non-auditory, standard neuropsychological tests, such as the Benton Visual Retention Test, Street Gestalt Test, Stroop Test and Trails Tests A and B. The neuropsychological tests revealed that arachnoid cysts in general cause dyscognition that also includes auditory processes and, more importantly, that these cognitive deficits normalise after surgical removal of the cyst. These observations constitute strong evidence in favour of surgical decompression.

  16. Lateralization of auditory-cortex functions.

    Science.gov (United States)

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

    In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. The main emphasis is given to hemodynamic and electromagnetic data of healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to the informational content of the sound but to rapid temporal information, which is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of this functional specialization of sound processing. These altered forms of lateralization may be caused by top-down and bottom-up effects, both inter- and intraindividually. In other words, relatively small changes in acoustic sound features, or in their familiarity, may modify the degree to which the left vs. right auditory areas contribute to sound encoding.

  17. Auditory-motor coupling affects phonetic encoding.

    Science.gov (United States)

    Schmidt-Kassow, Maren; Thöne, Katharina; Kaiser, Jochen

    2017-11-27

    Recent studies have shown that moving in synchrony with auditory stimuli boosts attention allocation and verbal learning. Furthermore, rhythmic tones are processed more efficiently than temporally random tones ('timing effect'), and this effect is increased when participants actively synchronize their motor performance with the rhythm of the tones, resulting in auditory-motor synchronization. Here, we investigated whether this also applies to sequences of linguistic stimuli (syllables). We compared temporally irregular syllable sequences with two temporally regular conditions, in which either the interval between syllable onsets (stimulus onset asynchrony, SOA) or the interval between the syllables' vowel onsets was kept constant. Entrainment to the stimulus presentation frequency (1 Hz) and event-related potentials were assessed in 24 adults who were instructed to detect pre-defined deviant syllables while they either pedaled or sat still on a stationary exercise bike. We found larger 1 Hz entrainment and P300 amplitudes for the SOA presentation during motor activity. Furthermore, the magnitude of the P300 component correlated with motor variability in the SOA condition and with 1 Hz entrainment, while 1 Hz entrainment in turn correlated with auditory-motor synchronization performance. These findings demonstrate that acute auditory-motor coupling facilitates phonetic encoding. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

    Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate...... sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance for statistically blurred sound textures presented at a fixed blur increased over those presented...... as a gradual blur. The results suggest that the correct identification of sound textures is influenced by the preceding blurred stimulus. These findings draw parallels to the recognition of blurred images....

  19. Integrating Real-time and Manual Monitored Soil Moisture Data to Predict Hillslope Soil Moisture Variations with High Temporal Resolutions

    Science.gov (United States)

    Zhu, Qing; Lv, Ligang; Zhou, Zhiwen; Liao, Kaihua

    2016-04-01

    Spatio-temporal variability of soil moisture remains a challenge to understand. A trade-off exists between spatial coverage and temporal resolution when using manual versus real-time soil moisture monitoring methods, which restricts comprehensive and intensive examination of soil moisture dynamics. In this study, we aimed to integrate manually and real-time monitored soil moisture to depict hillslope soil moisture dynamics with good spatial coverage and temporal resolution. Linear (stepwise multiple linear regression, SMLR) and non-linear (support vector machine, SVM) models were used to predict soil moisture at 38 manual sites (sampled 1-2 times per month) from soil moisture automatically collected at three real-time monitoring sites (sampled every 5 min). By comparing the accuracies of SMLR and SVM for each manual site, the optimal soil moisture prediction model for that site was determined. Results show that soil moisture at these 38 manual sites can be reliably predicted (with low root mean square errors). Topographic wetness index, profile curvature, and the relative difference of soil moisture and its standard deviation influenced the selection of the prediction model, since they relate to the dynamics of soil water distribution and movement. Using this approach, hillslope soil moisture spatial distributions at un-sampled times and dates were predicted after a typical rainfall event, and missing information on hillslope soil moisture dynamics was thereby acquired. This can benefit the determination of hot spots and moments of soil water movement, as well as the design of proper soil moisture monitoring plans at the field scale.
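
    A minimal sketch of the per-site model-selection step described above, using scikit-learn and synthetic data; an ordinary linear regression stands in for SMLR (scikit-learn has no stepwise selector), and each manual site keeps whichever model cross-validates with the lower RMSE.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# X: soil moisture at the 3 real-time stations at each sampling time
# y: manually sampled moisture at one of the 38 manual sites (synthetic)
X = rng.uniform(0.10, 0.40, size=(60, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.01, 60)

models = {"MLR": LinearRegression(), "SVM": SVR(kernel="rbf", C=10.0)}
rmse = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    rmse[name] = -scores.mean()

best = min(rmse, key=rmse.get)
print(rmse, "-> use", best, "for this site")
```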

  20. Sensitivity and specificity of auditory steady-state response testing

    Directory of Open Access Journals (Sweden)

    Camila Maia Rabelo

    2011-01-01

    Full Text Available INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady-state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady-state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation: 46 Hz). The difference between auditory steady-state response-estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady-state response-estimated and behavioral thresholds was greater in the mesial temporal sclerosis group, relative to the normal group, than in the central auditory processing disorder group relative to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR-estimated thresholds and actual behavioral thresholds, ASSR-estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between ASSR-estimated and behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady-state response testing...

  1. The use of listening devices to ameliorate auditory deficit in children with autism.

    Science.gov (United States)

    Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna

    2014-02-01

    To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency-modulation (FM, radio transmission) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently underwent a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB], spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in significantly improved discrimination of speech in noise in the children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.

  2. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography.

    Science.gov (United States)

    Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S

    2017-06-01

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.

  3. From 3D to 4D: Integration of temporal information into CT angiography studies.

    Science.gov (United States)

    Haubenreisser, Holger; Bigdeli, Amir; Meyer, Mathias; Kremer, Thomas; Riester, Thomas; Kneser, Ulrich; Schoenberg, Stefan O; Henzler, Thomas

    2015-12-01

    CT angiography is the current clinical standard for imaging many vascular diseases. Traditionally, this is done with a single arterial contrast phase. However, advances in CT technology allow for dynamic acquisition of the contrast bolus, thus adding temporal information to the examination. The aim of this article is to highlight the clinical possibilities of dynamic CTA using two examples. The accuracy of detection and quantification of stenosis in patients with peripheral arterial occlusive disease, especially in stages III and IV, is significantly improved by dynamic CTA examinations. Post-interventional follow-up examinations after endovascular aneurysm repair (EVAR) benefit from dynamic information, allowing higher sensitivity and specificity as well as more accurate classification of potential endoleaks. The radiation dose described for these dynamic examinations is low, and it can be further optimized by using lower tube voltages. There is a multitude of applications for dynamic CTA that need to be explored further in future studies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. An Efficient and Examinable Illegal Fallow Fields Detecting Method with Spatio-Temporal Information Integration

    Science.gov (United States)

    Chang, Chia-Hao; Chu, Tzu-How

    2017-04-01

    To control rice production and farm usage in Taiwan, the Agriculture and Food Agency (AFA) has published a series of policies since 1983 to subsidize farmers to plant different crops or to practice fallow. Because there was no efficient and examinable mechanism to verify the fallow fields surveyed by township offices, illegal fallow fields recurred each year. In this research, we used remote sensing images, GIS data of fields, and application records of fallow fields to establish an illegal fallow field detection method in Yulin County in central Taiwan. This method: 1. collected multi-temporal images from FS-2 or the SPOT series for 4 time periods; 2. combined the application records and GIS data of fields to verify the locations of fallow fields; 3. conducted ground truth surveys and classified images with ISODATA and Maximum Likelihood Classification (MLC); 4. defined the land cover type of fallow fields by zonal statistics; 5. verified accuracy against ground truth; and 6. developed a potential illegal fallow field survey method and benefit estimation. Using 190 fallow fields (127 legal and 63 illegal) as ground truth, the producer and user accuracies of illegal fallow field interpretation were 71.43% and 38.46%, respectively. If a township office surveyed the 117 fields classified as illegal, 45 of the 63 illegal fallow fields would be detected. By using our method, township offices can save 38.42% of the manpower needed to detect illegal fallow fields while obtaining an examinable 71.43% producer accuracy.
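
    The reported producer and user accuracies follow directly from the confusion counts given above (63 truly illegal fields, 117 fields flagged as illegal, 45 correct detections), as this small Python check confirms.

```python
# Confusion counts taken from the abstract's 190-field ground truth
hits = 45             # illegal fields correctly flagged
truly_illegal = 63    # all illegal fields in the ground truth
flagged_illegal = 117 # fields the classification labeled illegal

producer_accuracy = hits / truly_illegal   # share of illegal fields found
user_accuracy = hits / flagged_illegal     # share of flagged fields truly illegal

print(f"producer accuracy: {producer_accuracy:.2%}")  # 71.43%
print(f"user accuracy:     {user_accuracy:.2%}")      # 38.46%
```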

  5. Integrating spatial and temporal variability into the analysis of fish food web linkages in Tijuana Estuary.

    Energy Technology Data Exchange (ETDEWEB)

    West, Janelle M.; Williams, Greg D.; Madon, Sharook P.; Zedler, Joy B.

    2003-05-14

    Our understanding of fish feeding interactions at Tijuana Estuary was improved by incorporating estimates of spatial and temporal variability into diet analyses. We examined the stomach contents of 7 dominant species (n=579 total fish) collected between 1994 and 1999. General feeding patterns pooled over time produced a basic food web consisting of 3 major trophic levels: (1) primary consumers (Atherinops affinis, Mugil cephalus) that ingested substantial amounts of plant material and detritus; (2) benthic carnivores (Clevelandia ios, Hypsopsetta guttulata, Gillichthys mirabilis, and Fundulus parvipinnis) that ingested high numbers of calanoid copepods and exotic amphipods (Grandidierella japonica); and (3) piscivores (Paralichthys californicus and Leptocottus armatus) that often preyed on smaller gobiids. Similarity-based groupings of individual species' diets were identified using nonmetric multidimensional scaling to characterize their variability within and between species, and in space and time. This allowed us to identify major shifts and recognize events (i.e., modified prey abundance during the 1997-98 El Niño floods) that likely caused these shifts.

  6. Pediatric extratemporal epilepsy presenting with a complex auditory aura.

    Science.gov (United States)

    Clarke, Dave F; Boop, Frederick A; McGregor, Amy L; Perkins, F Frederick; Brewer, Vickie R; Wheless, James W

    2008-06-01

    Ear plugging (placing fingers in or covering the ears) is a clinical seizure semiology that has been described as a response to an unformed auditory hallucination localized to the superior temporal neocortex. The localizing value of ear plugging in more complex auditory hallucinations may involve more complicated circuitry. We report on one child whose aura was a more complex auditory phenomenon, consisting of a door opening and closing, getting louder as the ictus persisted. This child presented, at four years of age, with brief episodes of ear plugging followed by an acute emotional change that persisted until surgical resection of a left mesial frontal lesion at 11 years of age. Scalp video-EEG, magnetic resonance imaging, magnetoencephalography, and invasive video-EEG monitoring were carried out. The scalp EEG changes always started after clinical onset. These were not localizing, and encompassed a wide field over the bi-frontal head regions, the left side predominant over the right. Intracranial video-EEG monitoring with subdural electrodes over both frontal and temporal regions localized the seizure onset to the left mesial frontal lesion. The patient has remained seizure-free since the resection on June 28, 2006, approximately one and a half years ago. Ear plugging in response to simple auditory auras localizes to the superior temporal gyrus. If the patient has more complex, formed auditory auras, not only may the secondary auditory areas in the temporal lobe be involved, but one has to entertain the possibility of ictal onset from the frontal cortex.

  7. Severe auditory processing disorder secondary to viral meningoencephalitis.

    Science.gov (United States)

    Pillion, Joseph P; Shiffler, Dorothy E; Hoon, Alexander H; Lin, Doris D M

    2014-06-01

    To describe auditory function in an individual with bilateral damage to the temporal and parietal cortex. Case report. A previously healthy 17-year-old male is described who sustained extensive cortical injury following an episode of viral meningoencephalitis. He developed status epilepticus and required intubation and multiple anticonvulsants. Serial brain MRIs showed bilateral temporoparietal signal changes reflecting extensive damage to language areas and the first transverse gyrus of Heschl on both sides. The patient was referred for assessment of auditory processing but was so severely impaired in speech processing that he was unable to complete any formal tests of his speech processing abilities. Audiological assessment utilizing objective measures of auditory function established the presence of normal peripheral auditory function and illustrates the importance of using objective measures of auditory function in patients with injuries to the auditory cortex. The use of objective measures of auditory function is essential in establishing the presence of normal peripheral auditory function in individuals with cortical damage who may not be able to cooperate sufficiently for assessment using behavioral measures.

  8. Integrating environmental equity, energy and sustainability: A spatial-temporal study of electric power generation

    Science.gov (United States)

    Touche, George Earl

    The theoretical scope of this dissertation encompasses the ecological factors of equity and energy. Literature important to environmental justice and sustainability is reviewed, and a general integration of global concepts is delineated. The conceptual framework includes ecological integrity, quality human development, intra- and inter-generational equity and risk originating from human economic activity and modern energy production. The empirical focus of this study concentrates on environmental equity and electric power generation within the United States. Several designs are employed while using paired t-tests, independent t-tests, zero-order correlation coefficients and regression coefficients to test seven sets of hypotheses. Examinations are conducted at the census tract level within Texas and at the state level across the United States. At the community level within Texas, communities that host coal or natural gas utility power plants and corresponding comparison communities that do not host such power plants are tested for compositional differences. Comparisons are made both before and after the power plants began operating for purposes of assessing outcomes of the siting process and impacts of the power plants. Relationships between the compositions of the hosting communities and the risks and benefits originating from the observed power plants are also examined. At the statewide level across the United States, relationships between statewide composition variables and risks and benefits originating from statewide electric power generation are examined. Findings indicate the existence of some limited environmental inequities, but they do not indicate disparities that confirm the general thesis of environmental racism put forth by environmental justice advocates. Although environmental justice strategies that would utilize Title VI of the 1964 Civil Rights Act and the disparate impact standard do not appear to be applicable, some findings suggest potential...

  9. BAER - brainstem auditory evoked response

    Science.gov (United States)

    ... auditory potentials; Brainstem auditory evoked potentials; Evoked response audiometry; Auditory brainstem response; ABR; BAEP ... Normal results vary. Results will depend on the person and the instruments used to perform the test.

  10. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  11. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  12. Assessing temporal uncertainties in integrated groundwater management: an opportunity for change?

    Science.gov (United States)

    Anglade, J. A.; Billen, G.; Garnier, J.

    2013-12-01

    Since the early 1990s, high nitrate concentrations (occasionally exceeding the European drinking-water standard of 50 mg NO3-/l) have been recorded in the borewells supplying the water requirements of Auxerre's 60,000 inhabitants. The water catchment area (86 km2) is located in a rural area dedicated to field crop production in intensive cereal farming systems based on massive inputs of synthetic fertilizers. In 1998, a co-management committee comprising Auxerre City, the rural municipalities located in the water catchment area, consumers and farmers was created as a forward-looking associative structure to achieve integrated, adaptive and sustainable management of the resource. In 2002, 18 years after the first signs of water quality degradation, multiparty negotiation led to a cooperative agreement: a contribution, funded by a surcharge on consumers' water bills, to assist farmers toward new practices (optimized application of fertilizers, catch crops, and buffer strips). The management strategy, initially integrated and operating on a voluntary basis, did not rapidly deliver on its promises (there was no significant decrease in nitrate concentrations). It evolved into a combination of short-term palliative solutions and contractual and regulatory instruments with higher requirements. The establishment of a regulatory framework caused major tensions between stakeholders, which brought about a feeling of discouragement and a lack of understanding as to the absence of results on water quality after 20 years of joint actions. At this point, urban-rural solidarity was in danger of being undermined, so the time issue, i.e. the delay between changes in agricultural pressure and visible effects on water quality, was scientifically addressed and communicated to all the parties involved. First, water age dating through CFC and SF6 (anthropogenic gases) coupled with a statistical long-term analysis of agricultural evolution revealed a residence time in the Sequanian limestones...

  13. Perceptual Training Enhances Temporal Acuity for Multisensory Speech.

    Science.gov (United States)

    De Niear, Matthew A; Gupta, Pranjal B; Baum, Sarah H; Wallace, Mark T

    2017-10-28

    The temporal relationship between auditory and visual cues is a fundamental feature in the determination of whether these signals will be integrated. The temporal binding window (TBW) is a construct that describes the epoch of time during which asynchronous auditory and visual stimuli are likely to be perceptually bound. Recently, a number of studies have demonstrated the capacity for perceptual training to enhance temporal acuity for audiovisual stimuli (i.e., narrow the TBW). These studies, however, have only examined multisensory perceptual learning that develops in response to feedback provided when making judgments on simple, low-level audiovisual stimuli (i.e., flashes and beeps). Here we sought to determine whether perceptual training is capable of altering temporal acuity for audiovisual speech. Furthermore, we explored whether perceptual training with simple or complex audiovisual stimuli generalizes across levels of stimulus complexity. Using a simultaneity judgment (SJ) task, we measured individuals' temporal acuity (as estimated by the TBW) prior to, immediately following, and one week after four consecutive days of perceptual training. We report that temporal acuity for audiovisual speech stimuli is enhanced following perceptual training using speech stimuli. Additionally, we find that changes in temporal acuity following perceptual training do not generalize across the levels of stimulus complexity in this study. Overall, the results suggest that perceptual training is capable of enhancing temporal acuity for audiovisual speech in adults, and that the dynamics of the changes in temporal acuity following perceptual training differ between simple audiovisual stimuli and more complex audiovisual speech stimuli. Copyright © 2017. Published by Elsevier Inc.
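
    One common way to estimate a TBW from simultaneity-judgment data (one operationalization among several; this record does not state the study's exact fitting procedure) is to fit a Gaussian-shaped curve to the proportion of 'simultaneous' responses across audiovisual asynchronies and read off its width. A Python sketch with invented data:

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa, amplitude, center, width):
    """Gaussian-shaped probability of judging a pair 'simultaneous'."""
    return amplitude * np.exp(-((soa - center) ** 2) / (2 * width ** 2))

# Hypothetical SJ data: SOA in ms (negative = auditory leading),
# proportion of 'simultaneous' responses at each asynchrony
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
p_sim = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.60, 0.30, 0.10])

(amp, mu, sigma), _ = curve_fit(sj_curve, soa, p_sim, p0=[1.0, 0.0, 150.0])

# Define the TBW as the span where the fitted curve exceeds half its peak
half_width = sigma * np.sqrt(2 * np.log(2))
print(f"PSS = {mu:.0f} ms, TBW ≈ {2 * half_width:.0f} ms (full width at half max)")
```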

  14. Integrated remote sensing for multi-temporal analysis of urban land cover-climate interactions

    Science.gov (United States)

    Savastru, Dan M.; Zoran, Maria A.; Savastru, Roxana S.

    2016-08-01

    Climate change is considered to be the biggest environmental threat of the future in South-Eastern Europe. In the context of predicted global warming, urban climate is an important issue in scientific research. Surface energy processes play an essential role in urban weather, climate and hydrosphere cycles, as well as in urban heat redistribution. This paper investigated the influence of urban growth on the thermal environment, in relationship with other biophysical variables, in the Bucharest metropolitan area of Romania. Remote sensing data from Landsat TM/ETM+ and time-series MODIS Terra/Aqua sensors were used to assess urban land cover-climate interactions over the period between 2000 and 2015. Vegetation abundances and percent impervious surfaces were derived by means of a linear spectral mixture model, and a method for effectively enhancing impervious surface was developed to accurately examine urban growth. The land surface temperature (Ts), a key parameter for analyzing urban thermal characteristics, was also analyzed in relation to the Normalized Difference Vegetation Index (NDVI) at the city level. Based on these parameters, urban growth, the urban heat island (UHI) effect, and the relationships of Ts to other biophysical parameters were analyzed. The correlation analyses revealed that, at the pixel scale, Ts was strongly positively correlated with percent impervious surface and negatively correlated with vegetation abundance. This analysis provides an integrated research scheme, and the findings can be very useful for urban ecosystem modeling.
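
    The linear spectral mixture model mentioned above treats each pixel's spectrum as a non-negative combination of endmember spectra, so sub-pixel fractions such as vegetation abundance and percent impervious surface can be recovered by constrained least squares. A toy Python example with invented endmember spectra:

```python
import numpy as np
from scipy.optimize import nnls

# Endmember reflectance spectra (columns) for three cover types,
# invented for illustration: vegetation, impervious, soil
E = np.array([
    [0.05, 0.10, 0.12],   # blue band
    [0.08, 0.12, 0.15],   # green band
    [0.04, 0.14, 0.20],   # red band
    [0.50, 0.18, 0.25],   # near-infrared band
])

# A mixed pixel: 45% vegetation, 40% impervious, 15% soil
pixel = 0.45 * E[:, 0] + 0.40 * E[:, 1] + 0.15 * E[:, 2]

# Non-negative least squares unmixing; fractions renormalized to sum to 1
fractions, _ = nnls(E, pixel)
fractions /= fractions.sum()
print(dict(zip(["vegetation", "impervious", "soil"], fractions.round(2))))
```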

  15. Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data

    Science.gov (United States)

    Jenson, David; Bowers, Andrew L.; Harkrider, Ashley W.; Thornton, David; Cuellar, Megan; Saltuklaroglu, Tim

    2014-01-01

    Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. 17 and 15 out of 20 participants produced left and right μ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < .05) early alpha event-related synchronization prior to and during stimulus presentation and later alpha desynchronization following stimulus offset, while production conditions yielded alpha/beta desynchronization that began prior to production; μ-alpha activity may index re-afferent sensory feedback during speech rehearsal and production. PMID:25071633

  16. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure is dependent on the stimulus duration, with smaller fluctuations for short...

  17. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation did. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
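
    The statistical structure tracked in studies like this one is typically defined by sound-to-sound transitional probabilities: within an embedded pattern the next sound is highly predictable, while across pattern boundaries it is not. A minimal Python illustration of that computation (the sequence is invented):

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    """Sound-to-sound transition probabilities P(next | current)."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        probs[a][b] = n / first_counts[a]
    return probs

# Patterned stream: within 'triplets' transitions are certain (P = 1.0);
# across triplet boundaries they are not -- the kind of structure a
# pseudo-randomized control sequence destroys
patterned = list("ABCABCDEFDEFABCDEF")
for sound, nexts in transition_probabilities(patterned).items():
    print(sound, {k: round(v, 2) for k, v in nexts.items()})
```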

  18. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and event-related brain potential methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated, or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task), and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in all four stimulus combinations, reaction times and/or error rates were longer/higher, and the N400 component was larger, for ambiguous targets than for conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in the scalp topography and duration of the priming effects, possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building of virtual environments that need to convey meaning without words. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. The Temporal Dynamics of Feature Integration for Color, form, and Motion

    Directory of Open Access Journals (Sweden)

    KS Pilz

    2012-07-01

    Full Text Available When two similar visual stimuli are presented in rapid succession, only their fused image is perceived, without conscious access to the single stimuli. Such feature fusion occurs both for color (e.g., Efron, 1973) and form (e.g., Scharnowski et al., 2007). For verniers, the fusion process lasts for more than 400 ms, as has been shown using TMS (Scharnowski et al., 2009). In three experiments, we used light masks to investigate the time course of feature fusion for color, form, and motion. In the first experiment, two verniers were presented in rapid succession with opposite offset directions, and subjects had to indicate the offset direction of the vernier. In the second experiment, a red and a green disk were presented in rapid succession, and subjects had to indicate whether the disk appeared yellow (the fused percept) rather than red or green. In the third experiment, three frames of random dots were presented successively; the first two frames created a percept of apparent motion to the upper right and the last two frames to the upper left, or vice versa. Subjects had to indicate the direction of motion. All stimuli were presented foveally. In all three experiments, we first balanced performance so that neither the first nor the second stimulus dominated the fused percept. In a second step, a light mask was presented either before, during, or after stimulus presentation. Depending on presentation time, the light masks modulated the fusion process so that either the first or the second stimulus dominated the percept. Our results show that unconscious feature fusion lasts more than five times longer than the actual stimulus duration, which indicates that individual features are stored for a substantial amount of time before they are integrated.

  20. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    Directory of Open Access Journals (Sweden)

    David eJenson

    2014-07-01

    Full Text Available Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of µ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. 17 and 15 out of 20 participants produced left and right µ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < .05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that µ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. µ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while µ-alpha ERD may index re-afferent sensory feedback during speech rehearsal and production.
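
    At its core, an ERSP of the kind reported here is trial-averaged time-frequency power expressed in dB relative to a pre-stimulus baseline. The SciPy sketch below computes one for a single component's epoched activity; the window lengths and the toy alpha-ERD signal are illustrative, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import spectrogram

def ersp_db(epochs, fs, baseline_s):
    """Event-related spectral perturbation in dB relative to baseline.

    epochs: array (n_trials, n_samples) of one component's activity.
    Power is averaged over trials, then each frequency row is
    normalized by its mean power in the pre-stimulus baseline.
    """
    powers = []
    for trial in epochs:
        f, t, Sxx = spectrogram(trial, fs=fs, nperseg=128, noverlap=96)
        powers.append(Sxx)
    mean_power = np.mean(powers, axis=0)                    # (n_freqs, n_times)
    baseline = mean_power[:, t < baseline_s].mean(axis=1, keepdims=True)
    return f, t, 10 * np.log10(mean_power / baseline)

# Toy data: 40 trials of 2 s at 256 Hz, with 10 Hz (alpha) power that
# drops after 1 s, i.e. an alpha ERD like the one reported above
fs, rng = 256, np.random.default_rng(1)
t_ax = np.arange(0, 2, 1 / fs)
gain = np.where(t_ax < 1.0, 1.0, 0.3)
epochs = np.array([gain * np.sin(2 * np.pi * 10 * t_ax)
                   + 0.5 * rng.standard_normal(t_ax.size) for _ in range(40)])

f, t, ersp = ersp_db(epochs, fs, baseline_s=0.5)
alpha_row = np.argmin(np.abs(f - 10))
print(np.round(ersp[alpha_row], 1))  # negative values after ~1 s = ERD
```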

  1. Glial cell contributions to auditory brainstem development

    Directory of Open Access Journals (Sweden)

    Karina S Cramer

    2016-10-01

    Full Text Available Glial cells, previously thought to have generally supporting roles in the central nervous system, are emerging as essential contributors to multiple aspects of neuronal circuit function and development. This review focuses on the contributions of glial cells to the development of specialized auditory pathways in the brainstem. These pathways display specialized synapses and an unusually high degree of precision in circuitry that enables sound source localization. The development of these pathways thus requires highly coordinated molecular and cellular mechanisms. Several classes of glial cells, including astrocytes, oligodendrocytes, and microglia, have now been explored in these circuits in both avian and mammalian brainstems. Distinct populations of astrocytes are found over the course of auditory brainstem maturation. Early appearing astrocytes are associated with spatial compartments in the avian auditory brainstem. Factors from late appearing astrocytes promote synaptogenesis and dendritic maturation, and astrocytes remain integral parts of specialized auditory synapses. Oligodendrocytes play a unique role in both birds and mammals in highly regulated myelination essential for proper timing to decipher interaural cues. Microglia arise early in brainstem development and may contribute to maturation of auditory pathways. Together these studies demonstrate the importance of non-neuronal cells in the assembly of specialized auditory brainstem circuits.

  2. Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions.

    Science.gov (United States)

    Crosse, Michael J; Butler, John S; Lalor, Edmund C

    2015-10-21

    Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans using electroencephalography. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to syllabic rate (2-6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually. Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information about what the speaker is saying, but also, importantly, when the speaker is saying it. Studying how the brain uses this timing relationship to...
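
    A widely used multivariate method for indexing cortical entrainment of this kind is the linear temporal response function (TRF), fit by ridge regression from the speech envelope to the EEG over a range of time lags; treating it as representative of this study's analysis is an assumption here. A self-contained Python sketch with synthetic data:

```python
import numpy as np

def fit_trf(stimulus, response, fs, lags_ms=(0, 250), lam=1e2):
    """Ridge-regression temporal response function (forward model).

    Maps the speech envelope (stimulus) to EEG (response) over a set
    of time lags; the fitted weights index cortical envelope tracking.
    """
    lags = np.arange(int(lags_ms[0] * fs / 1000), int(lags_ms[1] * fs / 1000))
    # Lagged design matrix: each column is the envelope shifted by one lag
    X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
    X[: lags.max()] = 0  # discard wrap-around samples
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ response)
    return lags / fs, w

# Toy data: 'EEG' is a delayed copy of a random envelope plus noise
fs, rng = 64, np.random.default_rng(2)
env = np.abs(rng.standard_normal(fs * 60))           # 60 s of 'envelope'
eeg = 0.8 * np.roll(env, int(0.1 * fs)) + rng.standard_normal(env.size)

lag_s, weights = fit_trf(env, eeg, fs)
print(f"peak TRF lag: {lag_s[np.argmax(weights)] * 1000:.0f} ms")  # ~0.1 s delay recovered
```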

  3. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80 Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  4. [Auditory performance analyses of cochlear implanted patients].

    Science.gov (United States)

    Ozdemir, Süleyman; Kıroğlu, Mete; Tuncer, Ulkü; Sahin, Rasim; Tarkan, Ozgür; Sürmelioğlu, Ozgür

    2011-01-01

    The aim of this study was to analyze the auditory performance development of cochlear implanted patients. The effects of age at implantation, gender, implanted ear and model of the cochlear implant on the patients' auditory performance were investigated. Twenty-eight patients (12 boys, 16 girls) with congenital prelingual hearing loss who underwent cochlear implant surgery at our clinic, with a follow-up of at least 18 months, were selected for the study. The Listening Progress Profile (LiP), Monosyllable-Trochee-Polysyllable (MTP) and Meaningful Auditory Integration Scale (MAIS) tests were performed to analyze the auditory performances of the patients. To determine the effect of age at implantation on auditory performance, patients were assigned to two groups: group 1 (implantation age ≤60 months, mean 44.8 months) and group 2 (implantation age >60 months, mean 100.6 months). Group 2 had higher preoperative test scores than group 1, but after cochlear implant use the auditory performance levels of the patients in group 1 improved faster and equalized with those of the patients in group 2 after 12-18 months. Our data showed that variables such as sex, implanted ear or model of the cochlear implant did not have any statistically significant effect on the auditory performance of the patients after cochlear implantation. We found a negative correlation between implantation age and auditory performance improvement. We observed that children implanted at a young age developed language more quickly and went on to greater success in reading, writing and other educational skills.

  5. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represent an attempt to interpret contemporary environments through musical and auditory...

  6. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made by all currently running applications. The notion of multimodal objects is explored in this system as well: each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores: high scores represent presentations that include perceptual conflicts between overlapping sounds, while low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by...
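
    The heuristic penalty scoring summarized above can be caricatured in a few lines of Python: candidate schedules are scored so that temporal overlap between messages adds to the penalty, weighted more heavily when the overlapping sounds would conflict perceptually. The weights and message timings below are invented for illustration.

```python
import itertools

def overlap(a, b):
    """Seconds of temporal overlap between two (start, duration) messages."""
    start = max(a[0], b[0])
    end = min(a[0] + a[1], b[0] + b[1])
    return max(0.0, end - start)

def penalty(schedule, conflicting_pairs=()):
    """Heuristic penalty: overlap adds to the score, and conflicts are
    weighted more heavily when the two sounds share a perceptual
    'register' (e.g., both voice). Weights are illustrative only."""
    score = 0.0
    for (i, a), (j, b) in itertools.combinations(enumerate(schedule), 2):
        ov = overlap(a, b)
        if ov > 0:
            score += ov * (3.0 if (i, j) in conflicting_pairs else 1.0)
    return score

# Two candidate schedules for three messages, as (start time, duration):
simultaneous = [(0.0, 2.0), (0.5, 2.0), (1.0, 1.0)]
staggered    = [(0.0, 2.0), (2.1, 2.0), (4.2, 1.0)]
print(penalty(simultaneous, {(0, 1)}))  # high score: serious conflicts
print(penalty(staggered))               # 0.0: messages stay distinct
```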

  7. Autosomal dominant partial epilepsy with auditory features: Defining the phenotype

    Science.gov (United States)

    Winawer, Melodie R.; Hauser, W. Allen; Pedley, Timothy A.

    2009-01-01

    The authors previously reported linkage to chromosome 10q22-24 for autosomal dominant partial epilepsy with auditory features. This study describes seizure semiology in the original linkage family in further detail. Auditory hallucinations were most common, but other sensory symptoms (visual, olfactory, vertiginous, and cephalic) were also reported. Autonomic, psychic, and motor symptoms were less common. The clinical semiology points to a lateral temporal seizure origin. Auditory hallucinations, the most striking clinical feature, are useful for identifying new families with this syndrome. PMID:10851389

  8. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  9. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of the sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and to identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation, which may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.

  10. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

    Full Text Available The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  11. Leaf δ15N as a temporal integrator of nitrogen-cycling processes at the Mojave Desert FACE experiment

    Science.gov (United States)

    Sonderegger, D.; Koyama, A.; Jin, V.; Billings, S. A.; Ogle, K.; Evans, R. D.

    2011-12-01

    Ecosystem response to elevated carbon dioxide (CO2) in arid environments is regulated primarily by water, which may interact with nitrogen availability. Leaf nitrogen isotope composition (δ15N) can serve as an important indicator of changes in nitrogen dynamics by integrating changes in plant physiology and ecosystem biogeochemical processes. Because of this temporal integration, careful modeling of antecedent conditions is necessary for understanding the processes driving variation in leaf δ15N. We measured leaf δ15N of Larrea tridentata (creosotebush) over the 10-year lifetime of the Nevada Desert Free-Air CO2 Enrichment (FACE) experiment. Leaf δ15N exhibited two patterns. First, elevated atmospheric CO2 significantly increased Larrea leaf δ15N by approximately 2 to 3‰ compared to plants exposed to ambient CO2 concentrations. Second, plants in both CO2 treatments exhibited significant seasonal cycles in leaf δ15N, with higher values during the fall and winter seasons. We modeled leaf δ15N using a hierarchical Bayesian framework that incorporated soil moisture, temperature, and Palmer Drought Severity Index (PDSI) covariates in addition to a CO2 treatment effect and plot random effects. Antecedent moisture effects were modeled using a combination of the previous season's aggregated conditions and a smoothly varying weighted average of the months or weeks directly preceding the observation, as sketched below. The time lag between the driving antecedent condition and the observed change in leaf δ15N indicates a significant and unobserved process mechanism. Preliminary results suggest a CO2 treatment interaction with the lag effect, indicating a treatment effect on the latent process.
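
    The antecedent weighting described above can be sketched outside the Bayesian machinery. A minimal illustration, assuming fixed weights (in the actual model the weights are estimated as parameters): the antecedent covariate at month t is a normalized weighted average of the k preceding months.

```python
import numpy as np

def antecedent_covariate(monthly, weights):
    """Antecedent value at time t = weighted mean of the k preceding
    months (most recent first); weights are normalized to sum to 1."""
    x = np.asarray(monthly, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    k = w.size
    out = np.full(x.size, np.nan)  # undefined until k months have elapsed
    for t in range(k, x.size):
        out[t] = np.dot(w, x[t - k:t][::-1])
    return out

# Example: exponentially decaying influence over the preceding 6 months
soil_moisture = np.random.default_rng(0).random(120)  # hypothetical series
antecedent = antecedent_covariate(soil_moisture, np.exp(-np.arange(6) / 2.0))
```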

  12. Auditory responsive naming versus visual confrontation naming in dementia.

    Science.gov (United States)

    Miller, Kimberly M; Finney, Glen R; Meador, Kimford J; Loring, David W

    2010-01-01

    Dysnomia is typically assessed during neuropsychological evaluation through visual confrontation naming. Responsive naming to description, however, has been shown to have a more distributed representation in both fMRI and cortical stimulation studies. While naming deficits are common in dementia, the relative sensitivity of visual confrontation versus auditory responsive naming has not been directly investigated. The current study compared visual confrontation naming and auditory responsive naming in a dementia sample of mixed etiologies to examine patterns of performance across these naming tasks. A total of 50 patients with dementia of various etiologies were administered visual confrontation naming and auditory responsive naming tasks using stimuli that were matched in overall word frequency. Patients performed significantly worse on auditory responsive naming than visual confrontation naming. Additionally, patients with mixed Alzheimer's disease/vascular dementia performed more poorly on auditory responsive naming than did patients with probable Alzheimer's disease, although no group differences were seen on the visual confrontation naming task. Auditory responsive naming correlated with a larger number of neuropsychological tests of executive function than did visual confrontation naming. Auditory responsive naming thus appears to be more sensitive to the effects of increased lesion burden than visual confrontation naming. We believe that this reflects the more widespread topographical distribution of auditory naming sites within the temporal lobe, but it may also reflect the contributions of working memory and cognitive flexibility to performance.

  13. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one instantly recognises them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity across three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants made discriminations between pairs of the same stimuli in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  14. Auditory object formation affects modulation perception

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2005-01-01

    the target sound in time determine whether or not across-frequency modulation effects are observed. The results suggest that the binding of sound elements into coherent auditory objects precedes aspects of modulation analysis and imply a cortical locus involving integration times of several hundred...

  15. Temporal Properties of Chronic Cochlear Electrical Stimulation Determine Temporal Resolution of Neurons in Cat Inferior Colliculus

    National Research Council Canada - National Science Library

    Maike Vollmer; Russell L. Snyder; Patricia A. Leake; Ralph E. Beitel; Charlotte M. Moore; Stephen J. Rebscher

    1999-01-01

    .... We have developed an animal model of congenital deafness and investigated the effect of electrical stimulus frequency on the temporal resolution of central neurons in the developing auditory system of deaf cats...

  16. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. Here we address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand; to accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short target presentations but were not modified by auditory feedback of hand position. Long target presentations gave rise to a higher level of accuracy and were accompanied by early, automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in acoustic cues caused by changes in head orientation to support online motor control. How to design informative acoustic feedback needs to be studied carefully to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  17. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    Science.gov (United States)

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
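
    For readers unfamiliar with how the PSS and TOJ sensitivity are typically estimated, a minimal sketch follows: a common cumulative-Gaussian fit, not necessarily the authors' exact procedure, and with hypothetical data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa, pss, sigma):
    """P('visual first') as a cumulative Gaussian of SOA (visual lead positive)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

def fit_toj(soas, p_visual_first):
    """PSS = 50% point; JND = half the 25-75% spread (0.674 * sigma)."""
    (pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 50.0])
    return pss, sigma * norm.ppf(0.75)

# Hypothetical SOAs (ms) and observed response proportions
soas = np.array([-200, -100, -50, 0, 50, 100, 200])
p = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.85, 0.97])
print(fit_toj(soas, p))
```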

  18. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    Directory of Open Access Journals (Sweden)

    Argiro eVatakis

    2012-10-01

    Full Text Available We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analysed using an auditory-visual signal saliency model in order to compare signal saliency and behavioural data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analysed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly salient speech signals, with the visual signals requiring smaller visual leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory signal that modulate the temporal window of multisensory integration of the speech stream.

  19. Audio-Tactile Integration and the Influence of Musical Training

    Science.gov (United States)

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training. PMID:24465675

  20. Audio-tactile integration and the influence of musical training.

    Directory of Open Access Journals (Sweden)

    Anja Kuchenbuch

    Full Text Available Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  1. Metabolic Maturation of Auditory Neurones in the Superior Olivary Complex.

    Directory of Open Access Journals (Sweden)

    Barbara Trattner

    Full Text Available Neuronal activity is energetically costly, but despite its importance, energy production and consumption have been studied in only a few neurone types. Neuroenergetics is of special importance in auditory brainstem nuclei, where neurones exhibit various biophysical adaptations for extraordinary temporal precision and show particularly high firing rates. We have studied the development of energy metabolism in three principal nuclei of the superior olivary complex (SOC) involved in precise binaural processing in the Mongolian gerbil (Meriones unguiculatus). We used immunohistochemistry to quantify metabolic markers for energy consumption (Na+/K+-ATPase) and production (mitochondria, cytochrome c oxidase activity, and glucose transporter 3 (GLUT3)). In addition, we calculated neuronal ATP consumption for different postnatal ages (P0-90) based upon published electrophysiological and morphological data. Our calculations relate neuronal processes to the regeneration of Na+ gradients perturbed by neuronal firing, and thus to ATP consumption by the Na+/K+-ATPase. The developmental changes in calculated energy consumption closely resemble those of the metabolic markers. Both increase before and after hearing onset, which occurs at P12-13, and reach a plateau thereafter. The increase in Na+/K+-ATPase and mitochondria precedes the rise in GLUT3 levels and is already substantial before hearing onset, whilst GLUT3 levels are scarcely detectable at this age. Based on these findings we assume that auditory inputs crucially contribute to metabolic maturation. In one nucleus, the medial nucleus of the trapezoid body (MNTB), the initial rise in marker levels and calculated ATP consumption occurs distinctly earlier than in the other nuclei investigated, and is almost completed by hearing onset. Our study shows that the mathematical model used is applicable to brainstem neurones. Energy consumption varies markedly between SOC nuclei with their different neuronal properties
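
    The logic of the ATP calculation can be sketched with the standard capacitance-based estimate used in neuronal energy budgets. All parameter values below are illustrative placeholders, not the gerbil measurements used in the study.

```python
E_CHARGE = 1.602e-19  # coulombs per elementary charge

def atp_per_spike(capacitance_pf=20.0, delta_v_mv=100.0, overlap_factor=4.0):
    """Na+ entry is the charge needed to depolarize the membrane capacitance,
    scaled by an overlap factor for concurrent Na+/K+ flux; the Na+/K+-ATPase
    then extrudes 3 Na+ per ATP hydrolyzed."""
    charge = capacitance_pf * 1e-12 * delta_v_mv * 1e-3  # coulombs per spike
    na_ions = overlap_factor * charge / E_CHARGE
    return na_ions / 3.0

def atp_per_second(firing_rate_hz=100.0, **kwargs):
    """Regeneration cost of sustained firing: ATP molecules per second."""
    return firing_rate_hz * atp_per_spike(**kwargs)

print(f"{atp_per_second():.2e} ATP/s at 100 Hz")  # ~1.7e9 with these defaults
```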

  2. Auditory Midbrain Implant: A Review

    Science.gov (United States)

    Lim, Hubert H.; Lenarz, Minoo; Lenarz, Thomas

    2009-01-01

    The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single-shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain, with the best performer still not able to achieve open set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception along with several feasible solutions for improving AMI implementation are presented. PMID:19762428

  3. A case of generalized auditory agnosia with unilateral subcortical brain lesion.

    Science.gov (United States)

    Suh, Hyee; Shin, Yong-Il; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-12-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat words or write to dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia.

  4. Temporal cortex activation to audiovisual speech in normal-hearing and cochlear implant users measured with functional near-infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Luuk P.H. van de Rijt

    2016-02-01

    Full Text Available Background: Speech understanding may rely not only on auditory, but also on visual information. Non-invasive functional neuroimaging techniques can expose the neural processes underlying the integration of multisensory processes required for speech understanding in humans. Nevertheless, noise (from fMRI) limits its usefulness in auditory experiments, and electromagnetic artefacts caused by electronic implants worn by subjects can severely distort the scans (EEG, fMRI). Therefore, we assessed audio-visual activation of temporal cortex with a silent, optical neuroimaging technique: functional near-infrared spectroscopy (fNIRS). Methods: We studied temporal cortical activation, as represented by concentration changes of oxy- and deoxy-hemoglobin, in four easy-to-apply fNIRS optical channels in 33 normal-hearing adult subjects and 5 post-lingually deaf cochlear implant (CI) users in response to supra-threshold unisensory auditory and visual stimuli, as well as to congruent auditory-visual speech stimuli. Results: Activation effects were not visible in single fNIRS channels. However, by discounting physiological noise through reference channel subtraction, auditory, visual and audiovisual speech stimuli evoked concentration changes for all sensory modalities in both cohorts (p<0.001). Auditory stimulation evoked larger concentration changes than visual stimuli (p<0.001). A saturation effect was observed for the audiovisual condition. Conclusions: Physiological, systemic noise can be removed from fNIRS signals by reference channel subtraction. The observed multisensory enhancement of an auditory cortical channel can be plausibly described by a simple addition of the auditory and visual signals with saturation.
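
    Reference-channel subtraction of the kind described can be sketched as a least-squares regression of a shallow reference channel out of a long channel. A minimal illustration; the paper's actual fNIRS pipeline may differ in detail.

```python
import numpy as np

def reference_subtract(long_ch, ref_ch):
    """Fit the reference (shallow, physiology-dominated) channel to the long
    channel by least squares and subtract the fitted component."""
    X = np.column_stack([ref_ch, np.ones_like(ref_ch)])  # gain + offset
    beta, *_ = np.linalg.lstsq(X, long_ch, rcond=None)
    return long_ch - X @ beta
```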

  5. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  6. Brain responses and looking behaviour during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.

    Directory of Open Access Journals (Sweden)

    Elena V Kushnerenko

    2013-07-01

    Full Text Available The use of visual cues during the processing of audiovisual speech is known to be less efficient in children and adults with language difficulties, and such difficulties are more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6-9 months to 14-16 months of age. We used eye-tracking to examine whether individual differences in visual attention during audiovisual processing of speech in 6- to 9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6- to 9-month-old infants also participated in an event-related potential (ERP) audiovisual task within the same experimental session. Language development was then followed up at the age of 14-16 months using two measures of language development, the Preschool Language Scale (PLS) and the Oxford Communicative Development Inventory (CDI). The results show that those infants who were less efficient in auditory speech processing at the age of 6-9 months had lower receptive language scores at 14-16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6-9 months were both significantly associated with language development at 14-16 months. These findings add to the understanding of individual differences in neural signatures of audiovisual processing and associated looking behaviour in infants.

  7. Auditory Cortex Tracks Both Auditory and Visual Stimulus Dynamics Using Low-Frequency Neuronal Phase Modulation

    Science.gov (United States)

    Luo, Huan; Liu, Zuxiang; Poeppel, David

    2010-01-01

    Integrating information across sensory domains to construct a unified representation of multi-sensory signals is a fundamental characteristic of perception in ecological contexts. One provocative hypothesis deriving from neurophysiology suggests that there exists early and direct cross-modal phase modulation. We provide evidence, based on magnetoencephalography (MEG) recordings from participants viewing audiovisual movies, that low-frequency neuronal information lies at the basis of the synergistic coordination of information across auditory and visual streams. In particular, the phase of the 2–7 Hz delta and theta band responses carries robust (in single trials) and usable information (for parsing the temporal structure) about stimulus dynamics in both sensory modalities concurrently. These experiments are the first to show in humans that a particular cortical mechanism, delta-theta phase modulation across early sensory areas, plays an important “active” role in continuously tracking naturalistic audio-visual streams, carrying dynamic multi-sensory information, and reflecting cross-sensory interaction in real time. PMID:20711473
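
    Single-trial low-frequency phase of the kind analyzed here is commonly extracted with a zero-phase band-pass filter followed by the Hilbert transform. A sketch with illustrative filter settings, not the authors' exact MEG pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def low_freq_phase(x, fs, lo=2.0, hi=7.0, order=3):
    """Instantaneous delta-theta phase: zero-phase band-pass, then the
    angle of the analytic signal."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))
```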

  8. Activating teaching in lecture halls (Aktiverende Undervisning i auditorier)

    DEFF Research Database (Denmark)

    Parus, Judith

    Workshop on experiences with, and the use of, activating teaching methods in lecture halls and with large classes. Which methods have worked well and which poorly? What considerations should one make?

  9. Integrated community profiling indicates long-term temporal stability of the predominant faecal microbiota in captive cheetahs.

    Directory of Open Access Journals (Sweden)

    Anne A M J Becker

    Full Text Available Understanding the symbiotic relationship between gut microbes and their animal host requires characterization of the core microbiota across populations and in time. Especially in captive populations of endangered wildlife species such as the cheetah (Acinonyx jubatus), this knowledge is a key element to enhance feeding strategies and reduce gastrointestinal disorders. In order to investigate the temporal stability of the intestinal microbiota in cheetahs under human care, we conducted a longitudinal study over a 3-year period with bimonthly faecal sampling of 5 cheetahs housed in two European zoos. For this purpose, an integrated 16S rRNA DGGE-clone library approach was used in combination with a series of real-time PCR assays. Our findings disclosed a stable faecal microbiota, beyond intestinal community variations that were detected between zoo sample sets or between animals. The core of this microbiota was dominated by members of Clostridium clusters I, XI and XIVa, with mean concentrations ranging from 7.5-9.2 log10 CFU/g faeces and with significant positive correlations between these clusters (P<0.05), and by Lactobacillaceae. Moving window analysis of DGGE profiles revealed 23.3-25.6% change between consecutive samples for four of the cheetahs. The fifth animal in the study suffered from intermittent episodes of vomiting and diarrhoea during the monitoring period and exhibited remarkably more change (39.4%). This observation may reflect the temporary impact of perturbations such as the animal's compromised health, antibiotic administration or a combination thereof, which temporarily altered the relative proportions of Clostridium clusters I and XIVa. In conclusion, this first long-term monitoring study of the faecal microbiota in feline strict carnivores not only reveals a remarkable compositional stability of this ecosystem, but also shows a qualitative and quantitative similarity in a defined set of faecal bacterial lineages across the five
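
    The moving-window figure quoted above reduces to a percent change between consecutive profiles. A minimal sketch, assuming Pearson correlation of densitometric curves as the similarity measure (DGGE studies also commonly use Dice coefficients on band-presence matrices):

```python
import numpy as np

def percent_change(profile_a, profile_b):
    """% change between two consecutive profiles = 100 * (1 - similarity);
    similarity here is the Pearson correlation of densitometric curves."""
    r = np.corrcoef(profile_a, profile_b)[0, 1]
    return 100.0 * (1.0 - r)

def moving_window(profiles):
    """Percent change between each pair of consecutive sampling points."""
    return [percent_change(profiles[i], profiles[i + 1])
            for i in range(len(profiles) - 1)]
```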

  10. Noise differentially impacts phoneme representations in the auditory and speech motor systems.

    Science.gov (United States)

    Du, Yi; Buchsbaum, Bradley R; Grady, Cheryl L; Alain, Claude

    2014-05-13

    Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (-12, -9, -6, -2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca's area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca's area exhibited effective phoneme categorization when SNR ≥ -6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR > 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.
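
    Producing stimuli at a target SNR, as in the six listening conditions above, comes down to a power-ratio scaling. A hypothetical helper, not the study's stimulus code:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db, then add
    it to the speech. Assumes `noise` is at least as long as `speech`."""
    noise = noise[:speech.size]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```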

  11. Temporal Processing in Audition: Insights from Music.

    Science.gov (United States)

    Rajendran, Vani G; Teki, Sundeep; Schnupp, Jan W H

    2017-11-03

    Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation. Perception of musical rhythms relies on an ability to form temporal predictions, a general feature of temporal processing that is equally relevant to auditory scene analysis, pattern detection, and speech perception. By bringing together findings from the music and auditory literature, we hope to inspire researchers to look beyond the conventions of their respective fields and consider the cross-disciplinary implications of studying auditory temporal sequence processing. We begin by highlighting music as an interesting sound stimulus that may provide clues to how temporal patterning in sound drives perception. Next, we review the SMS literature and discuss possible neural substrates for the perception of, and synchronization to, musical beat. We then move away from music to explore the perceptual effects of rhythmic timing in pattern detection, auditory scene analysis, and speech perception. Finally, we review the neurophysiology of general timing processes that may underlie aspects of the perception of rhythmic patterns. We conclude with a brief summary and outlook for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals, and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and in cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  13. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-01-01

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top–down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience. PMID:26831102

  14. Auditory streaming by phase relations between components of harmonic complexes: a comparative study of human subjects and bird forebrain neurons.

    Science.gov (United States)

    Dolležal, Lena-Vanessa; Itatani, Naoya; Günther, Stefanie; Klump, Georg M

    2012-12-01

    Auditory streaming describes a percept in which a sequential series of sounds either is segregated into different streams or is integrated into one stream based on differences in their spectral or temporal characteristics. This phenomenon has been analyzed in human subjects (psychophysics) and European starlings (neurophysiology), presenting harmonic complex (HC) stimuli with different phase relations between their frequency components. Such stimuli allow evaluating streaming by temporal cues, as these stimuli only vary in the temporal waveform but have identical amplitude spectra. The present study applied the commonly used ABA- paradigm (van Noorden, 1975) and matched stimulus sets in psychophysics and neurophysiology to evaluate the effects of fundamental frequency (f₀), frequency range (f_LowCutoff), tone duration (TD), and tone repetition time (TRT) on streaming by phase relations of the HC stimuli. By comparing the percept of humans with rate or temporal responses of avian forebrain neurons, a neuronal correlate of perceptual streaming of HC stimuli is described. The differences in the pattern of the neurons' spike rate responses provide a better explanation for the percept observed in humans than the differences in the temporal responses (i.e., the representation of the periodicity in the timing of the action potentials). Especially for HC stimuli with a short 40-ms duration, the differences in the pattern of the neurons' temporal responses failed to represent the patterns of human perception, whereas the neurons' rate responses showed a good match. These results suggest that differential rate responses are a better predictor for auditory streaming by phase relations than temporal responses.
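
    The stimulus logic, identical amplitude spectra but different temporal waveforms, can be sketched by building harmonic complexes that differ only in component phases and arranging them in the ABA- pattern. All values are illustrative; the 40-ms tone duration echoes the shortest condition mentioned above.

```python
import numpy as np

def harmonic_complex(f0, n_harm, dur, fs, phases):
    """Sum of n_harm harmonics of f0; `phases` (one per harmonic) sets the
    temporal waveform without changing the amplitude spectrum."""
    t = np.arange(int(dur * fs)) / fs
    y = np.zeros(t.size)
    for k in range(1, n_harm + 1):
        y += np.sin(2 * np.pi * k * f0 * t + phases[k - 1])
    return y / np.max(np.abs(y))

def aba_triplets(a, b, n, fs, gap_s=0.1):
    """ABA- pattern: A B A followed by a silent gap, repeated n times."""
    gap = np.zeros(int(gap_s * fs))
    return np.tile(np.concatenate([a, b, a, gap]), n)

fs, f0, dur, n_harm = 44100, 100, 0.04, 20
rng = np.random.default_rng(1)
tone_a = harmonic_complex(f0, n_harm, dur, fs, np.zeros(n_harm))          # sine phase
tone_b = harmonic_complex(f0, n_harm, dur, fs, rng.uniform(0, 2 * np.pi, n_harm))
sequence = aba_triplets(tone_a, tone_b, 10, fs)
```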

  15. Auditory hallucinations induced by trazodone

    Science.gov (United States)

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  16. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. One debated issue in this area has been the nature of the perceptual deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out in literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia are impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. We conclude that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  18. Syllabic (∼2-5 Hz) and fluctuation (∼1-10 Hz) ranges in speech and auditory processing.

    Science.gov (United States)

    Edwards, Erik; Chang, Edward F

    2013-11-01

    Given recent interest in syllabic rates (∼2-5 Hz) for speech processing, we review the perception of "fluctuation" range (∼1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (∼2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
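
    A TMTF is measured with stimuli like the sinusoidally amplitude-modulated tone below; sweeping the modulation rate fm through roughly 1-10 Hz probes the fluctuation range discussed here. A minimal sketch with illustrative parameters:

```python
import numpy as np

def am_tone(fc=1000.0, fm=4.0, m=1.0, dur=1.0, fs=44100):
    """Sinusoidally amplitude-modulated tone: (1 + m*sin(2*pi*fm*t)) * carrier,
    with modulation depth m and modulation rate fm (Hz)."""
    t = np.arange(int(dur * fs)) / fs
    return (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
```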

  19. Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech.

    Science.gov (United States)

    Nath, Audrey R; Beauchamp, Michael S

    2011-02-02

    Humans are remarkably adept at understanding speech, even when it is contaminated by noise. Multisensory integration may explain some of this ability: combining independent information from the auditory modality (vocalizations) and the visual modality (mouth movements) reduces noise and increases accuracy. Converging evidence suggests that the superior temporal sulcus (STS) is a critical brain area for multisensory integration, but little is known about its role in the perception of noisy speech. Behavioral studies have shown that perceptual judgments are weighted by the reliability of the sensory modality: more reliable modalities are weighted more strongly, even if the reliability changes rapidly. We hypothesized that changes in the functional connectivity of STS with auditory and visual cortex could provide a neural mechanism for perceptual reliability weighting. To test this idea, we performed five blood oxygenation level-dependent functional magnetic resonance imaging and behavioral experiments in 34 healthy subjects. We found increased functional connectivity between the STS and auditory cortex when the auditory modality was more reliable (less noisy) and increased functional connectivity between the STS and visual cortex when the visual modality was more reliable, even when the reliability changed rapidly during presentation of successive words. This finding matched the results of a behavioral experiment in which the perception of incongruent audiovisual syllables was biased toward the more reliable modality, even with rapidly changing reliability. Changes in STS functional connectivity may be an important neural mechanism underlying the perception of noisy speech.

  20. Audiovisual Temporal Processing and Synchrony Perception in the Rat

    Science.gov (United States)

    Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.

    2017-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately
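
    One common way to summarize simultaneity-judgment data like these is to fit a bell-shaped curve to the proportion of "synchronous" responses across SOAs. A sketch under that assumption (not necessarily the authors' exact analysis, and with hypothetical numbers):

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa, amp, mu, sigma):
    """Proportion of 'synchronous' responses: Gaussian over SOA."""
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

# Hypothetical proportions of 'synchronous' responses at each SOA (ms)
soas = np.array([-400, -200, -100, -40, -10, 0, 10, 40, 100, 200, 400])
p_sync = np.array([0.05, 0.2, 0.5, 0.75, 0.8, 0.82, 0.8, 0.7, 0.45, 0.2, 0.05])
(amp, mu, sigma), _ = curve_fit(sj_curve, soas, p_sync, p0=[0.8, 0.0, 100.0])
print(f"peak at {mu:.0f} ms SOA, synchrony window width ~ {sigma:.0f} ms")
```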

  1. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
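
    The net-motion-energy manipulation can be sketched as the sum of two opposite-drifting sinusoids: equal contrasts give a pure counterphasing, motion-ambiguous pattern, while unequal contrasts bias motion energy toward the stronger component. A 1-D sketch with illustrative parameters; the Gabor's Gaussian envelope is omitted.

```python
import numpy as np

def grating_movie(c_left, c_right, sf=0.02, tf=4.0, width=256,
                  n_frames=60, fps=60.0):
    """Frames (rows) of rightward- plus leftward-drifting sinusoids.
    sf: spatial frequency (cycles/pixel); tf: temporal frequency (Hz)."""
    x = np.arange(width)               # pixels
    t = np.arange(n_frames) / fps      # seconds
    phase_r = 2 * np.pi * (sf * x[None, :] - tf * t[:, None])  # drifts right
    phase_l = 2 * np.pi * (sf * x[None, :] + tf * t[:, None])  # drifts left
    return c_right * np.sin(phase_r) + c_left * np.sin(phase_l)
```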

  2. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.

  3. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Hannah L. Golden

    2015-01-01

    Full Text Available Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but is vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.

  4. Multi-Regional Adaptation in Human Auditory Association Cortex

    Directory of Open Access Journals (Sweden)

    Urszula Malinowska

    2017-05-01

    Full Text Available In auditory cortex, neural responses decrease with stimulus repetition, a phenomenon known as adaptation. Adaptation is thought to facilitate detection of novel sounds and improve perception in noisy environments. Although it is well established that adaptation occurs in primary auditory cortex, it is not known whether adaptation also occurs in higher auditory areas involved in processing complex sounds, such as speech. Resolving this issue is important for understanding the neural bases of adaptation and for avoiding potential post-operative deficits after temporal lobe surgery for treatment of focal epilepsy. Intracranial electrocorticographic recordings were acquired simultaneously from electrodes implanted in primary and association auditory areas of the right (non-dominant) temporal lobe in a patient with complex partial seizures originating from the inferior parietal lobe. Simple and complex sounds were presented in a passive oddball paradigm. We measured changes in single-trial high-gamma power (70–150 Hz) and in regional and inter-regional network-level activity indexed by cross-frequency coupling. Repetitive tones elicited the greatest adaptation and corresponding increases in cross-frequency coupling in primary auditory cortex. Conversely, auditory association cortex showed stronger adaptation for complex sounds, including speech. This first report of multi-regional adaptation in human auditory cortex highlights the role of the non-dominant temporal lobe in suppressing neural responses to repetitive background sounds (noise). These results underscore the clinical utility of functional mapping to avoid potential post-operative deficits, including increased listening difficulties in noisy, real-world environments.
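
    Cross-frequency coupling between low-frequency phase and high-gamma amplitude is often quantified with a mean-vector-length modulation index; a sketch under that assumption (the paper's exact coupling measure may differ):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def _bandpass(x, fs, lo, hi, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_modulation_index(x, fs, phase_band=(4, 8), amp_band=(70, 150)):
    """Mean-vector-length phase-amplitude coupling: pair low-frequency phase
    with high-gamma amplitude; fs must exceed twice amp_band[1]."""
    phase = np.angle(hilbert(_bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(_bandpass(x, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))
```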

  5. Neural correlates of audiovisual temporal processing--comparison of temporal order and simultaneity judgments.

    Science.gov (United States)

    Binder, M

    2015-08-06

    Multisensory integration is one of the essential features of perception. Though the processing of spatial information is an important clue to understanding its mechanisms, a complete picture cannot be achieved without taking into account the processing of temporal information. Simultaneity judgments (SJs) and temporal order judgments (TOJs) are the two most widely used procedures for explicit estimation of temporal relations between sensory stimuli. Behavioral studies suggest that the two tasks recruit different sets of cognitive operations. On the other hand, empirical evidence related to their neuronal underpinnings is still scarce, especially with regard to multisensory stimulation. The aim of the current fMRI study was to explore the neural correlates of both tasks using a paradigm with audiovisual stimuli. Fifteen subjects performed TOJ and SJ tasks grouped in 18-second blocks. Subjects were asked to estimate the onset synchrony or the temporal order of onsets of non-semantic auditory and visual stimuli. Common areas of activation elicited by both tasks were found in the bilateral fronto-parietal network, including regions whose activity can also be observed in tasks involving spatial selective attention. This can be regarded as evidence for the hypothesis that tasks involving selection based on temporal information engage similar regions to attentional tasks based on spatial information. The direct contrast between the SJ task and the TOJ task did not reveal any regions showing stronger activity for the SJ task than for the TOJ task. The reverse contrast revealed a number of left-hemisphere regions that were more active during the TOJ task than the SJ task, in the prefrontal cortex, the parietal lobules (superior and inferior) and the occipito-temporal regions. These results suggest that the TOJ task requires recruitment of additional cognitive operations in comparison to the SJ task. They are probably associated with forming representations of stimuli as

  6. High spatial-temporal resolution and integrated surface and subsurface precipitation-runoff modelling for a small stormwater catchment

    Science.gov (United States)

    Hailegeorgis, Teklu T.; Alfredsen, Knut

    2018-02-01

    Reliable runoff estimation is important for the design of water infrastructure and for flood risk management in urban catchments. We developed a spatially distributed Precipitation-Runoff (P-R) model that explicitly represents land cover information, performs integrated modelling of the surface and subsurface components of the urban precipitation water cycle, and routes flow. We conducted parameter calibration and validation for a small (21.255 ha) stormwater catchment in Trondheim City during Summer-Autumn events and seasons and during snow-influenced Winter-Spring seasons, at high spatial and temporal resolutions of 5 m × 5 m grid size and 2 min, respectively. The calibration resulted in good performance measures (Nash-Sutcliffe efficiency, NSE = 0.65-0.94) and acceptable validation NSE for the seasonal and snow-influenced periods. Infiltration-excess surface runoff dominates the peak flows, while subsurface flow into the sewer pipes also augments them. Based on the total volumes of simulated flow in sewer pipes (Qsim) and precipitation (P) during the calibration periods, Qsim/P ranges from 21.44% for an event to 56.50% for the Winter-Spring season, in close agreement with the observed volumes (Qobs/P). The lowest percentage of precipitation volume transformed into total simulated runoff in the catchment (QT) is 79.77%. Computation of evapotranspiration (ET) indicated that ET/P is less than 3% for the events and snow-influenced seasons, while it is about 18% for the Summer-Autumn season. The subsurface flow contribution to the sewer pipes is markedly higher than the total surface runoff volume for some events and for the Summer-Autumn season. The highest peak flow rates occur in the Winter-Spring season. Therefore, urban runoff simulation for design and management purposes should include two-way interactions between subsurface runoff and flow in sewer pipes, as well as snow-influenced seasons. The developed urban P-R model is…
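
    The calibration measure quoted above, the Nash-Sutcliffe efficiency, and the volume ratios such as Qsim/P are simple aggregates of the flow and precipitation series. A minimal sketch of both; the function names are illustrative, not the authors' code:

    ```python
    import numpy as np

    def nash_sutcliffe(q_obs, q_sim):
        """NSE = 1 for a perfect fit; 0 means no better than predicting the
        mean observed flow; negative means worse than that baseline."""
        q_obs = np.asarray(q_obs, dtype=float)
        q_sim = np.asarray(q_sim, dtype=float)
        return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

    def volume_ratio_percent(q, p):
        """Share of precipitation volume appearing as flow, e.g. Qsim/P."""
        return 100.0 * np.sum(q) / np.sum(p)
    ```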

  7. Auditory aura in frontal opercular epilepsy: sounds from afar.

    Science.gov (United States)

    Thompson, Stephen A; Alexopoulos, Andreas; Bingaman, William; Gonzalez-Martinez, Jorge; Bulacio, Juan; Nair, Dileep; So, Norman K

    2015-06-01

    Auditory auras are typically considered to localize to the temporal neocortex. Herein, we present two cases of frontal operculum/perisylvian epilepsy with auditory auras. Following a non-invasive evaluation, including ictal SPECT and magnetoencephalography, implicating the frontal operculum, these cases were evaluated with invasive monitoring, using stereoelectroencephalography and subdural (plus depth) electrodes, respectively. Spontaneous and electrically-induced seizures showed an ictal onset involving the frontal operculum in both cases. A typical auditory aura was triggered by stimulation of the frontal operculum in one. Resection of the frontal operculum and subjacent insula rendered one case seizure- (and aura-) free. From a hodological (network) perspective, we discuss these findings with consideration of the perisylvian and insular network(s) interconnecting the frontal and temporal lobes, and revisit the non-invasive data, specifically that of ictal SPECT.

  8. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Full Text Available Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activations in auditory cortex compared to normal-hearing (NH) controls, reflecting functional reorganization of both the visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization, whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users’ speech recognition by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.
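
    The stated relationship — good speech recognition as long as the beneficial cross-modal response exceeds the maladaptive one — amounts to a sign comparison of the two activation estimates. A minimal sketch under that reading; the variable names and values are assumptions, not the study's analysis code:

    ```python
    def crossmodal_balance(aud_evoked_visual_ctx, vis_evoked_auditory_ctx):
        """Positive when the beneficial reorganization (auditory-evoked activity
        in visual cortex) outweighs the maladaptive one (visual-evoked activity
        in auditory cortex), the regime the study links to good speech scores."""
        return aud_evoked_visual_ctx - vis_evoked_auditory_ctx

    # e.g. activation estimates from fNIRS GLM fits (hypothetical values)
    balance = crossmodal_balance(0.42, 0.18)
    likely_good_speech = balance > 0
    ```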

  9. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus...... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also...... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  10. Long-term music training tunes how the brain temporally binds signals from multiple senses.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2011-12-20

    Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics-fMRI study used the musician's brain to investigate how sensory-motor experience molds the temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry. Critically, the premotor asynchrony effects predicted musicians' perceptual sensitivity to audiovisual asynchrony. Our results suggest that piano practice fine-tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds. This internal forward model furnishes more precise estimates of the relative audiovisual timings and hence stronger prediction-error signals specifically for asynchronous music in a premotor-cerebellar circuitry. Our findings show intimate links between action production and audiovisual temporal binding in perception.
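
    The behavioral measure here, the width of the audiovisual temporal integration window, is typically estimated by fitting a curve to the proportion of "synchronous" responses across audiovisual lags. A minimal sketch assuming a Gaussian model; the data points are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def synchrony_curve(soa, amp, pss, sigma):
        """Gaussian model of p('synchronous') vs. audiovisual lag in ms;
        pss is the point of subjective simultaneity, sigma scales the window."""
        return amp * np.exp(-((soa - pss) ** 2) / (2.0 * sigma ** 2))

    # Invented data: negative = auditory leading, positive = visual leading
    soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
    p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.50, 0.15])
    (amp, pss, sigma), _ = curve_fit(synchrony_curve, soa, p_sync, p0=[1.0, 0.0, 100.0])
    window_fwhm = 2.355 * sigma   # narrower window = tighter audiovisual binding
    ```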

  11. Octave effect in auditory attention

    National Research Council Canada - National Science Library

    Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee

    2013-01-01

    ... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...

  12. The reading ability of good and poor temporal processors among a group of college students.

    Science.gov (United States)

    Au, Agnes; Lovegrove, Bill

    2008-05-01

    In this study, we examined whether good auditory and good visual temporal processors were better than their poor counterparts on certain reading measures. Various visual and auditory temporal tasks were administered to 105 undergraduates, who read phonologically regular pseudowords and irregular words presented sequentially either in the same location ("word" condition) or in different locations ("line" condition). Results indicated that auditory temporal acuity was more relevant to reading, whereas visual temporal acuity was more relevant to spelling. Good auditory temporal processors had no advantage in processing pseudowords, even though pseudoword reading correlated significantly with auditory temporal processing. This suggests that higher cognitive or phonological processes mediated the relationship between auditory temporal processing and pseudoword reading. Good visual temporal processors had no advantage in processing irregular words, nor did they process the line condition more accurately than the word condition. The discrepancy might be attributed to the use of normal adults and to an unnatural reading situation that did not fully capture the function of visual temporal processes. Auditory and visual temporal processing abilities co-occurred to some degree but remained considerably independent. There was also no relationship between the type and severity of reading deficits and the type and number of temporal deficits.

  13. Reduced P50 Auditory Sensory Gating Response in Professional Musicians

    Science.gov (United States)

    Kizkin, Sibel; Karlidag, Rifat; Ozcan, Cemal; Ozisik, Handan Isin

    2006-01-01

    Evoked-potential studies have demonstrated that musicians can distinguish musical sounds preattentively and automatically, and in more detail, at the temporal, spectral, and spatial levels. It is, however, not known whether musicians differ in the early stages of auditory data processing. The most emphasized and…
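
    The abstract is cut off, but the P50 sensory gating response named in the title is conventionally quantified in a paired-click paradigm as the ratio of the second-click to the first-click P50 amplitude. A minimal sketch under that assumption; the search window and function name are illustrative:

    ```python
    import numpy as np

    def p50_gating_ratio(erp_s1, erp_s2, fs, window=(0.040, 0.080)):
        """Paired-click gating index: S2/S1 amplitude ratio of the P50 peak;
        lower ratios indicate stronger sensory gating.

        erp_s1, erp_s2 : averaged ERPs to the first and second click
        fs             : sampling rate in Hz; window is the P50 range in seconds
        """
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return erp_s2[i0:i1].max() / erp_s1[i0:i1].max()
    ```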

  14. Quantifying stimulus-response rehabilitation protocols by auditory feed