WorldWideScience

Sample records for auditory temporal integration

  1. Subthreshold outward currents enhance temporal integration in auditory neurons.

    Science.gov (United States)

    Svirskis, Gytis; Dodla, Ramana; Rinzel, John

    2003-11-01

    Many auditory neurons possess low-threshold potassium currents (I(KLT)) that enhance their responsiveness to rapid and coincident inputs. We present recordings from gerbil medial superior olivary (MSO) neurons in vitro and modeling results that illustrate how I(KLT) improves the detection of brief signals, of weak signals in noise, and of the coincidence of signals (as needed for sound localization). We quantify the enhancing effect of I(KLT) on temporal processing with several measures: signal-to-noise ratio (SNR), reverse correlation or spike-triggered averaging of input currents, and interaural time difference (ITD) tuning curves. To characterize how I(KLT), which activates below spike threshold, influences a neuron's voltage rise toward threshold, i.e., how it filters the inputs, we focus first on the response to weak and noisy signals. Cells and models were stimulated with a computer-generated steady barrage of random inputs, mimicking weak synaptic conductance transients (the "noise"), together with a larger but still subthreshold postsynaptic conductance, EPSG (the "signal"). Reduction of I(KLT) decreased the SNR, mainly due to an increase in spontaneous firing (more "false positives"). The spike-triggered reverse correlation indicated that I(KLT) shortened the integration time for spike generation. I(KLT) also heightened the model's timing selectivity for coincidence detection of simulated binaural inputs. Further, ITD tuning is shifted in favor of a slope code rather than a place code by precise and rapid inhibition onto MSO cells (Brand et al. 2002). In several ways, low-threshold outward currents are seen to shape integration of weak and strong signals in auditory neurons. PMID:14669013
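
    The spike-triggered averaging (reverse correlation) measure used in this study can be illustrated in a few lines of NumPy; the stimulus, ramp shape, and spike times below are synthetic stand-ins for the recorded currents, not data from the paper:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_indices, window):
    """Average the stimulus over the `window` samples preceding each spike."""
    segments = [stimulus[i - window:i] for i in spike_indices if i >= window]
    return np.mean(segments, axis=0)

# Synthetic example: white-noise "input current" with a depolarizing ramp
# embedded just before each simulated spike time.
rng = np.random.default_rng(0)
stim = rng.normal(0.0, 1.0, 10_000)
spikes = np.arange(500, 10_000, 500)
for s in spikes:
    stim[s - 50:s] += np.linspace(0.0, 2.0, 50)  # ramp leading into the spike

sta = spike_triggered_average(stim, spikes, window=50)
```

With a genuine depolarizing feature preceding each spike, the average rises toward the spike time; a shortened integration window for spike generation would show up as a sharper, later-peaking STA.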

  2. Temporal Lobe Epilepsy Alters Auditory-motor Integration For Voice Control

    Science.gov (United States)

    Li, Weifeng; Chen, Ziyi; Yan, Nan; Jones, Jeffery A.; Guo, Zhiqiang; Huang, Xiyan; Chen, Shaozhen; Liu, Peng; Liu, Hanjun

    2016-01-01

    Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls. PMID:27356768
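
    The two graph metrics reported in the network analysis, clustering coefficient and characteristic path length, can be computed directly from an adjacency structure with plain Python; the toy graph below is illustrative, not the patients' connectivity data:

```python
import itertools
from collections import deque

def clustering_and_path_length(adj):
    """Average clustering coefficient and characteristic path length for an
    undirected graph given as an adjacency dict {node: set(neighbors)}."""
    # Clustering: fraction of each node's neighbor pairs that are themselves linked.
    cc = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = sum(1 for a, b in itertools.combinations(nbrs, 2) if b in adj[a])
        cc.append(2.0 * links / (k * (k - 1)))
    # Path length: mean shortest-path distance over all node pairs, via BFS.
    dists = []
    for src in adj:
        seen, frontier, d = {src}, deque([(src, 0)]), {}
        while frontier:
            node, dist = frontier.popleft()
            d[node] = dist
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, dist + 1))
        dists.extend(v for n, v in d.items() if n != src)
    return sum(cc) / len(cc), sum(dists) / len(dists)

# Toy 4-node graph: a triangle (0,1,2) with node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
avg_cc, char_path = clustering_and_path_length(adj)
```

Higher clustering with longer path lengths, as reported for the TLE group, is the signature of a less efficiently integrated network.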

  3. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  4. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20–80 ms duration information) and the theta band (~150–300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
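
    Phase tracking of this kind is commonly quantified as inter-trial phase coherence: the length of the mean unit phase vector across trials at a given frequency. A minimal NumPy sketch with synthetic epochs, whose parameters are illustrative rather than taken from the MEG recordings:

```python
import numpy as np

def phase_locking(trials, freq_bin):
    """Inter-trial phase coherence at one FFT bin: |mean of unit phase vectors|.
    1.0 = identical phase on every trial; near 0 = random phase."""
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, freq_bin])
    return np.abs(np.mean(np.exp(1j * phases)))

# Synthetic trials (40 epochs x 200 samples): a 5-cycle oscillation with a
# fixed phase plus noise, versus pure noise.
rng = np.random.default_rng(1)
t = np.arange(200) / 200.0
locked = np.sin(2 * np.pi * 5 * t) + rng.normal(0.0, 1.0, (40, 200))
unlocked = rng.normal(0.0, 1.0, (40, 200))

plv_locked = phase_locking(locked, freq_bin=5)    # bin 5 = 5 cycles per epoch
plv_unlocked = phase_locking(unlocked, freq_bin=5)
```

Stimulus-driven resetting of an intrinsic oscillation yields a high value at the matching band, while non-matching structure leaves phases random across trials.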

  5. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds. These tests evaluated auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performance of 26 TLE patients before and/or after ATL was compared to that of 18 healthy subjects. Patients before and after ATL presented with similar deficits in pitch retention and in the identification and short-term memorisation of environmental sounds, while showing no impairment in basic acoustic processing relative to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  6. Implicit temporal expectation attenuates auditory attentional blink.

    Directory of Open Access Journals (Sweden)

    Dawei Shen

    Full Text Available Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded, event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.

  7. Implicit Temporal Expectation Attenuates Auditory Attentional Blink

    OpenAIRE

    Shen, Dawei; Alain, Claude

    2012-01-01

    Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded, event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the ...

  8. Intact spectral but abnormal temporal processing of auditory stimuli in autism.

    NARCIS (Netherlands)

    Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with h

  9. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper, and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
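
    The cross-correlational index used to separate tempo "predictors" from "trackers" compares the lag-0 and lag-1 correlations between inter-tap intervals and pacing inter-onset intervals. A sketch with simulated tapping data (the sinusoidal tempo profile and noise level are assumptions for illustration, not the study's stimuli):

```python
import numpy as np

def prediction_index(iti, ioi):
    """Compare lag-0 vs. lag-1 cross-correlation between inter-tap intervals
    (ITI) and pacing inter-onset intervals (IOI). 'Predictors' match the
    current interval (high lag-0 r); 'trackers' copy the previous one
    (high lag-1 r). Positive values suggest prediction."""
    r0 = np.corrcoef(iti, ioi)[0, 1]           # tap interval vs. same interval
    r1 = np.corrcoef(iti[1:], ioi[:-1])[0, 1]  # tap interval vs. previous interval
    return r0 - r1

# Synthetic pacing sequence with smooth (sinusoidal) tempo changes, in ms.
ioi = 500.0 + 50.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 40))
rng = np.random.default_rng(2)
predictor_iti = ioi + rng.normal(0.0, 5.0, 40)  # anticipates the current tempo
tracker_iti = np.roll(ioi, 1)                   # reproduces the previous interval
tracker_iti[0] = ioi[0]
```

The predictor's intervals line up best with the current pacing interval, while the tracker's line up best with the preceding one, so the index separates the two strategies by sign.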

  10. Effects of Physical Rehabilitation Integrated with Rhythmic Auditory Stimulation on Spatio-Temporal and Kinematic Parameters of Gait in Parkinson’s Disease

    Science.gov (United States)

    Pau, Massimiliano; Corona, Federica; Pili, Roberta; Casula, Carlo; Sors, Fabrizio; Agostini, Tiziano; Cossu, Giovanni; Guicciardi, Marco; Murgia, Mauro

    2016-01-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson's disease (PD). In this context, the use of rhythmic auditory stimulation (RAS) has proven useful in improving several spatio-temporal parameters, but scarce information is available on its effect on gait patterns from a kinematic viewpoint. In this study, we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of supervised rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4 ± 11.1, Hoehn and Yahr 1–3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a 3-month follow-up, using concise measures (Gait Profile Score and Gait Variable Score, GPS and GVS, respectively), which are able to describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence and in correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, driven mainly by significant decreases in the GVS score associated with hip flexion–extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new rehabilitative treatments or improving current ones.

  11. Auditory Evoked Fields Elicited by Spectral, Temporal, and Spectral–Temporal Changes in Human Cerebral Cortex

    OpenAIRE

    Christo Pantev; Hidehiko Okamoto

    2012-01-01

    Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoe...

  12. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica Gori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are unable to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, perhaps reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflicting or non-conflicting) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. Conversely, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory calibration by vision in the spatial visuo-audio task and by audition in the temporal visuo-audio task.
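
    The Bayesian predictions referred to above follow the standard reliability-weighted (maximum-likelihood) cue-combination rule of Ernst & Banks (2002); the sketch below uses made-up auditory and visual estimates to show how the bimodal prediction is formed:

```python
import numpy as np

def bayes_combination(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted cue combination: each cue is weighted by its
    inverse variance; the combined variance is below both unimodal ones."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    sigma = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return mu, sigma

# Temporal task, illustrative numbers: audition is the more reliable cue
# (smaller sigma), so the bimodal estimate sits near the auditory value.
mu_bi, sigma_bi = bayes_combination(mu_a=100.0, sigma_a=10.0,
                                    mu_v=120.0, sigma_v=30.0)
```

Comparing measured bimodal thresholds against this predicted sigma is how "optimal" integration is diagnosed; thresholds above the prediction, as in the children's spatial data, indicate sub-optimal combination.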

  13. Language and central temporal auditory processing in childhood epilepsies.

    Science.gov (United States)

    Boscariol, Mirela; Casali, Raquel L; Amaral, M Isabel R; Lunardi, Luciane L; Matas, Carla G; Collela-Santos, M Francisca; Guerreiro, Marilisa M

    2015-12-01

    Because of the relationship between rolandic, temporoparietal, and centrotemporal areas and language and auditory processing, the aim of this study was to investigate language and central temporal auditory processing in children with epilepsy (rolandic epilepsy and temporal lobe epilepsy) and compare these with those of children without epilepsy. Thirty-five children aged between eight and 14 years old were studied. Two groups of children participated in this study: a group with childhood epilepsy (n=19), and a control group without epilepsy or linguistic changes (n=16). There was a significant difference between the two groups, with the worst performance in children with epilepsy for the gaps-in-noise test (right ear), the receptive vocabulary task (PPVT) (p<0.001), and the phonological working memory (nonwords repetition) task (p=0.001). We conclude that impairment of central temporal auditory processing and of language skills may be comorbidities in children with rolandic epilepsy and temporal lobe epilepsy. PMID:26580215

  14. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Christo Pantev

    2012-05-01

    Full Text Available Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30–50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  15. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal ordering of sensory inputs in speech production.

  16. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. PMID:25044949

  17. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating motor sensation and vocal sound under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, or 133 ms) for three minutes to induce 'lag adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by the average delay time experienced in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
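
    A shift in simultaneity responses of this kind is typically summarized as a change in the point of subjective simultaneity (PSS). As a rough sketch, the PSS can be estimated as the centroid of the simultaneity-response distribution over stimulus-onset asynchronies; the response proportions below are invented for illustration, not the study's data:

```python
import numpy as np

def pss(soas, p_simultaneous):
    """Point of subjective simultaneity estimated as the centroid of the
    simultaneity-response distribution over stimulus-onset asynchronies (ms)."""
    p = np.asarray(p_simultaneous, dtype=float)
    return float(np.sum(np.asarray(soas, dtype=float) * p) / np.sum(p))

# Hypothetical proportions of "simultaneous" responses at each SOA, before
# and after adapting to delayed (133 ms) auditory feedback.
soas = [-100, -50, 0, 50, 100, 150, 200]
before = [0.1, 0.4, 0.9, 0.5, 0.2, 0.1, 0.0]
after = [0.0, 0.1, 0.4, 0.8, 0.6, 0.3, 0.1]

shift = pss(soas, after) - pss(soas, before)  # positive: toward adapted delay
```

A full analysis would fit a Gaussian or cumulative psychometric function rather than take a centroid, but the recalibration effect appears the same way: the distribution's center migrates toward the adapted delay.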

  18. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and that the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  19. Left temporal lobe structural and functional abnormality underlying auditory hallucinations

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2009-05-01

    Full Text Available In this article, we review recent findings from our laboratory that auditory hallucinations in schizophrenia are internally generated speech mis-representations lateralized to the left superior temporal gyrus and sulcus. Such experiences are, moreover, not cognitively suppressed, owing to enhanced attention to the voices and failure of fronto-parietal executive control functions. An overview of diagnostic questionnaires for the scoring of symptoms is presented, together with a review of behavioural, structural, and functional MRI data. Functional imaging data have shown either increased or decreased activation depending on whether patients were presented with an external stimulus during scanning. Structural imaging data have shown reduction of grey matter density and volume in the same areas of the temporal lobe. The behavioural and neuroimaging findings are moreover hypothesized to be related to glutamate hypofunction in schizophrenia. We propose a model for the understanding of auditory hallucinations that traces their origin to uncontrolled neuronal firing in the speech areas of the left temporal lobe, which is not suppressed by volitional cognitive control processes, owing to dysfunctional fronto-parietal executive cortical networks.

  20. Rapid auditory learning of temporal gap detection.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2016-07-01

    The rapid initial phase of training-induced improvement has been shown to reflect a genuine sensory change in perception. Several features of early and rapid learning, such as generalization and stability, remain to be characterized. The present study demonstrated that learning effects from brief training on a temporal gap detection task, using spectrally similar narrowband noise markers to define the gap (within-channel task), transfer across ears but not across spectrally dissimilar markers (between-channel task). The learning effects associated with brief training on a gap detection task were found to be stable for at least a day. These initial findings have significant implications for characterizing early and rapid learning effects. PMID:27475211
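
    Gap-detection thresholds in studies like this are usually measured with an adaptive staircase. A minimal two-down/one-up sketch with a simulated deterministic listener; the starting gap, step size, and reversal count are illustrative assumptions, not the study's procedure:

```python
def staircase(respond, start=20.0, step=2.0, reversals_needed=6):
    """2-down/1-up adaptive track for a gap-detection threshold (ms).
    `respond(gap_ms)` returns True if the listener detects the gap.
    Converges on the ~70.7%-correct point; the threshold estimate is
    the mean gap at the reversal points."""
    gap, correct_run, direction = start, 0, 0
    reversal_gaps = []
    while len(reversal_gaps) < reversals_needed:
        if respond(gap):
            correct_run += 1
            if correct_run == 2:          # two correct in a row -> smaller gap
                correct_run = 0
                if direction == +1:       # turning point going down
                    reversal_gaps.append(gap)
                direction = -1
                gap = max(gap - step, 0.5)
        else:                             # one wrong -> larger gap
            correct_run = 0
            if direction == -1:           # turning point going up
                reversal_gaps.append(gap)
            direction = +1
            gap += step
    return sum(reversal_gaps) / len(reversal_gaps)

# Simulated listener who reliably detects gaps longer than 6 ms.
threshold = staircase(lambda gap: gap > 6.0)
```

With this idealized listener the track oscillates between 6 and 8 ms, so the reversal mean lands at 7 ms; a real listener's probabilistic responses make the track hover around the 70.7%-correct gap duration instead.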

  1. Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Science.gov (United States)

    Muñoz-López, M; Insausti, R; Mohedano-Moriano, A; Mishkin, M; Saunders, R C

    2015-01-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may thus differ from those for vision. Long-term visual recognition memory requires anatomical connections from visual association area TE to areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher-order processing areas of the rostral superior temporal gyrus (rSTG) and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in the rSTG and dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, and TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of the PRC, to the entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of the PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys. PMID:26041980

  2. Auditory dominance in motor-sensory temporal recalibration.

    Science.gov (United States)

    Sugano, Yoshimori; Keetels, Mirjam; Vroomen, Jean

    2016-05-01

    Perception of synchrony between one's own action (e.g. a finger tap) and the sensory feedback thereof (e.g. a flash or click) can be shifted after exposure to an induced delay (temporal recalibration effect, TRE). It remains elusive, however, whether the same mechanism underlies motor-visual (MV) and motor-auditory (MA) TRE. We examined this by measuring crosstalk between MV- and MA-delayed feedbacks. During an exposure phase, participants pressed a mouse button at a constant pace while receiving visual or auditory feedback that was either delayed (+150 ms) or subjectively synchronous (+50 ms). During a post-test, participants then tried to tap in sync with visual or auditory pacers. TRE manifested itself as a compensatory shift in the tap-pacer asynchrony (a larger anticipation error after exposure to delayed feedback). In experiment 1, MA and MV feedback were either both synchronous (MV-sync and MA-sync) or both delayed (MV-delay and MA-delay), whereas in experiment 2, different delays were mixed across alternating trials (MV-sync and MA-delay or MV-delay and MA-sync). Exposure to consistent delays induced equally large TREs for auditory and visual pacers with similar build-up courses. However, with mixed delays, we found that synchronized sounds erased MV-TRE, but synchronized flashes did not erase MA-TRE. These results suggest that similar mechanisms underlie MA- and MV-TRE, but that auditory feedback is more potent than visual feedback in inducing a rearrangement of motor-sensory timing. PMID:26610349

  3. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus;

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  4. Increment of brain temporal perfusion during auditory stimulation

    International Nuclear Information System (INIS)

    A study on the dynamic exploration of the auditory pathway is presented, in which technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) single-photon emission computed tomography (SPET) was used in volunteers with normal hearing. Changes in 99mTc-HMPAO distribution were calculated using a region of interest (ROI)/whole-brain count ratio. The results showed a temporal perfusion increment of 17% (right) and 19% (left) during supraliminal tonal stimulation, which was significantly different from the control ROI. Sensitivity tests for the method are required before any clinical application. (orig.)
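
    The ROI/whole-brain count ratio and the resulting percent increment can be sketched as follows; the count values are hypothetical illustrations chosen to reproduce the reported 17% right-temporal increment, not data from the study:

    ```python
    # Sketch: percent perfusion increment from ROI/whole-brain count ratios,
    # as in a 99mTc-HMPAO SPET comparison of rest vs. stimulation.
    # All counts below are hypothetical.

    def roi_ratio(roi_counts, whole_brain_counts):
        return roi_counts / whole_brain_counts

    def percent_increment(ratio_rest, ratio_stim):
        return 100.0 * (ratio_stim - ratio_rest) / ratio_rest

    rest = roi_ratio(1_000.0, 50_000.0)   # temporal ROI ratio at rest
    stim = roi_ratio(1_170.0, 50_000.0)   # same ROI during tonal stimulation

    print(round(percent_increment(rest, stim), 1))  # 17.0
    ```

    Normalizing by whole-brain counts compensates for global differences in injected dose and uptake between the two scans.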

  5. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    Science.gov (United States)

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  6. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi Toida

    2016-01-01

    Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
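
    A 50% detection point like the one reported (~200 ms) is read off a psychometric function. A minimal sketch, interpolating linearly between the two delays that bracket the criterion; the detection rates are hypothetical, not the study's data:

    ```python
    # Sketch: estimating the 50% delay-detection point from proportions of
    # "delayed" responses at several feedback delays, by linear interpolation
    # between the two delays that bracket the criterion.
    # Detection rates below are hypothetical.

    def detection_point(delays_ms, p_detected, criterion=0.5):
        for (d0, p0), (d1, p1) in zip(zip(delays_ms, p_detected),
                                      zip(delays_ms[1:], p_detected[1:])):
            if p0 <= criterion <= p1:
                return d0 + (criterion - p0) * (d1 - d0) / (p1 - p0)
        raise ValueError("criterion not bracketed by the data")

    delays = [50, 100, 150, 200, 250, 300]
    p_del = [0.05, 0.10, 0.30, 0.50, 0.80, 0.95]

    print(detection_point(delays, p_del))  # 200.0
    ```

    In practice a sigmoid (e.g. logistic) fit is often preferred over piecewise interpolation, but the bracketing logic is the same.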

  7. Specialized prefrontal “auditory fields”: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria Medalla

    2014-04-01

    No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal “auditory field” as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  8. Forward Masking: Temporal Integration or Adaptation?

    DEFF Research Database (Denmark)

    Ewert, Stephan D.; Hau, Ole; Dau, Torsten

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up...... physiological mechanisms of binaural processing in mammals; integration of the different stimulus features into auditory scene analysis; physiological mechanisms related to the formation of auditory objects; speech perception; and limitations of auditory perception resulting from hearing disorders....

  9. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    OpenAIRE

    David E Jenson; Bowers, Andrew L.

    2015-01-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. ...

  10. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm

    OpenAIRE

    Jenson, David; Harkrider, Ashley W.; Thornton, David; Bowers, Andrew L.; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dors...

  11. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of

  12. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique. PMID:26500519
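
    The ERSP measure used here expresses event-related changes in band power as a dB ratio against a pre-stimulus baseline, with positive values indicating ERS. A minimal sketch of that normalization, on hypothetical alpha-power values (not the study's data or the EEGLAB implementation):

    ```python
    # Sketch: event-related spectral perturbation (ERSP) as the dB change of
    # alpha-band power relative to a pre-stimulus baseline, averaged over trials.
    # Positive values indicate event-related synchronization (ERS).
    # Power values below are hypothetical.

    import math

    def ersp_db(power_trials, baseline_power):
        """Mean power change (dB) relative to baseline at each time point."""
        n = len(power_trials)
        out = []
        for t in range(len(power_trials[0])):
            mean_p = sum(trial[t] for trial in power_trials) / n
            out.append(10.0 * math.log10(mean_p / baseline_power))
        return out

    trials = [
        [1.0, 2.0, 4.0],  # alpha power at three time points, trial 1
        [1.0, 2.0, 4.0],  # trial 2
    ]

    print(ersp_db(trials, baseline_power=1.0))  # [0.0, ~3.0, ~6.0] dB: growing alpha ERS
    ```

    Real pipelines estimate the time-frequency power per trial first (e.g. via wavelets) and control for multiple comparisons, as with the pFDR threshold reported above.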

  13. Neural Representations of Complex Temporal Modulations in the Human Auditory Cortex

    OpenAIRE

    Ding, Nai; Simon, Jonathan Z.

    2009-01-01

    Natural sounds such as speech contain multiple levels and multiple types of temporal modulations. Because of nonlinearities of the auditory system, however, the neural response to multiple, simultaneous temporal modulations cannot be predicted from the neural responses to single modulations. Here we show the cortical neural representation of an auditory stimulus simultaneously frequency modulated (FM) at a high rate, fFM ≈ 40 Hz, and amplitude modulated (AM) at a slow rate, fAM

  14. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required ‘active’ discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral ‘auditory’ alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  15. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties. PMID:24264076

  16. Listening to another sense: somatosensory integration in the auditory system.

    Science.gov (United States)

    Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E

    2015-07-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders. PMID:25526698

  17. Deficit of auditory temporal processing in children with dyslexia-dysgraphia

    Directory of Open Access Journals (Sweden)

    Sima Tajik

    2012-12-01

    Background and Aim: Auditory temporal processing reflects an important aspect of auditory performance, and a deficit in it can impair a child's speech, language learning, and reading. Temporal resolution, a subcomponent of temporal processing, can be evaluated with the gap-in-noise detection test. Given the relation between auditory temporal processing deficits and the phonological disorder of children with dyslexia-dysgraphia, the aim of this study was to evaluate these children with the gap-in-noise (GIN) test. Methods: The gap-in-noise test was performed on 28 normal and 24 dyslexic-dysgraphic children aged 11-12 years. The mean approximate threshold and percentage of correct answers were compared between the groups. Results: The mean approximate threshold and percentage of correct answers showed no significant difference between the right and left ears (p>0.05). The mean approximate threshold of children with dyslexia-dysgraphia (6.97 ms, SD=1.09) was significantly higher (p<0.001) than that of the normal group (5.05 ms, SD=0.92), and their mean percentage of correct answers (58.05%, SD=4.98) was lower than that of the normal group (69.97%, SD=7.16; p<0.001). Conclusion: Abnormal temporal resolution was found in children with dyslexia-dysgraphia based on the gap-in-noise test. Since the brainstem and auditory cortex are responsible for auditory temporal processing, structural and functional differences in these areas between normal and dyslexic-dysgraphic children probably lead to abnormal coding of auditory temporal information, making a deficit in auditory temporal processing inevitable.
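
    An "approximate threshold" in GIN-style testing is commonly scored as the shortest gap duration detected on a majority of its presentations. A minimal sketch of that scoring, on hypothetical per-child data; the 4-of-6 criterion is an assumption of this illustration, not stated in the abstract:

    ```python
    # Sketch: approximate threshold from gap-in-noise (GIN) style data, taken
    # here as the shortest gap duration detected on at least 4 of 6
    # presentations (the criterion is an assumption, not from the abstract).
    # The per-child detection counts below are hypothetical.

    def approximate_threshold(detections_by_gap, min_hits=4):
        """detections_by_gap: {gap_ms: number of correct detections}."""
        qualifying = [gap for gap, hits in sorted(detections_by_gap.items())
                      if hits >= min_hits]
        return qualifying[0] if qualifying else None

    typical_child = {2: 0, 3: 1, 4: 3, 5: 5, 6: 6, 8: 6}
    dyslexic_child = {2: 0, 3: 0, 4: 1, 5: 2, 6: 3, 8: 5}

    print(approximate_threshold(typical_child))    # 5 (ms)
    print(approximate_threshold(dyslexic_child))   # 8 (ms)
    ```

    The hypothetical values are chosen to mirror the direction of the group difference reported above (higher thresholds in the dyslexic-dysgraphic group).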

  18. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    Science.gov (United States)

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  19. Effects of different types of auditory temporal training on language skills: a systematic review

    Directory of Open Access Journals (Sweden)

    Cristina Ferraz Borges Murphy

    2013-10-01

    Previous studies have investigated the effects of auditory temporal training on language disorders. Recently, the effects of newer approaches, such as musical training and the use of software, have also been considered. To investigate the effects of different auditory temporal training approaches on language skills, we reviewed the available literature on musical training, the use of software and formal auditory training by searching the SciELO, MEDLINE, LILACS-BIREME and EMBASE databases. Study Design: Systematic review. Results: Using evidence levels I and II as the criteria, 29 of the 523 papers found were deemed relevant to one of the topics (use of software - 13 papers; formal auditory training - six papers; musical training - 10 papers). Of the three approaches, studies that investigated the use of software and musical training had the highest levels of evidence; however, these studies also raised concerns about the hypothesized relationship between auditory temporal processing and language. Future studies are needed to establish the actual contribution of these three types of auditory temporal training to language skills.

  20. Spectral vs. Temporal Auditory Processing in Specific Language Impairment: A Developmental ERP Study

    Science.gov (United States)

    Ceponiene, R.; Cummings, A.; Wulfeck, B.; Ballantyne, A.; Townsend, J.

    2009-01-01

    Pre-linguistic sensory deficits, especially in "temporal" processing, have been implicated in developmental language impairment (LI). However, recent evidence has been equivocal with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral…

  1. Auditory Spectral Integration in the Perception of Static Vowels

    Science.gov (United States)

    Fox, Robert Allen; Jacewicz, Ewa; Chang, Chiung-Yun

    2011-01-01

    Purpose: To evaluate potential contributions of broadband spectral integration in the perception of static vowels. Specifically, can the auditory system infer formant frequency information from changes in the intensity weighting across harmonics when the formant itself is missing? Does this type of integration produce the same results in the lower…

  2. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities.

    Science.gov (United States)

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  3. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. PMID:26343343

  4. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble of aud...... compute temporally selective receptive fields are described....

  5. [Unilateral auditory hallucinations due to left temporal lobe ischemia: a case report].

    Science.gov (United States)

    Anegawa, T; Hara, K; Yamamoto, K; Matsuda, M

    1995-10-01

    …The Wechsler adult intelligence scale revealed a verbal IQ of 91 and a performance IQ of 100. Pure tone audiometry revealed bilateral, mild peripheral sensorineural hearing loss. Brainstem auditory evoked potentials were unrevealing. The EEG showed slow activities in the left temporoparietal region. Magnetic resonance imaging of the brain failed to reveal any relevant abnormalities except for an old hemorrhagic parietal infarct. The SPECT with Tc99m-HMPAO, however, showed reduced blood flow in the left temporal lobe including the first temporal convolution as well as in the left parietal lobe. Based on the SPECT findings, unilateral auditory hallucinations in our patient are considered to have resulted from the left temporal lobe ischemia. Our case indicates that unilateral auditory hallucinations may have a clinicoanatomical correlation with contralateral temporal lobe lesions. PMID:8821499

  6. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe.

    Science.gov (United States)

    Berns, Gregory S; Cook, Peter F; Foxley, Sean; Jbabdi, Saad; Miller, Karla L; Marino, Lori

    2015-07-22

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, the auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and shows a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.

  7. Beat gestures modulate auditory integration in speech perception

    OpenAIRE

    Biau, Emmanuel; Soto-Faraco, Salvador, 1970-

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words accompanied by beats elicited a positive shift in ERPs at an early sensory stage (before 100 ms) and at a later time window coinciding with the auditory com...

  8. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters the spike timing and firing rates of pyramidal cell responses to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single- and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20-ms pairing intervals for single-unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  9. Temporal feature integration for music genre classification

    OpenAIRE

    Meng, Anders; Ahrendt, Peter; Larsen, Jan; Hansen, Lars Kai

    2007-01-01

    Temporal feature integration is the process of combining all the feature vectors in a time window into a single feature vector in order to capture the relevant temporal information in the window. The mean and variance along the temporal dimension are often used for temporal feature integration, but they capture neither the temporal dynamics nor dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music ge...

  10. Inactivation of the left auditory cortex impairs temporal discrimination in the rat

    Czech Academy of Sciences Publication Activity Database

    Rybalko, Natalia; Šuta, Daniel; Popelář, Jiří; Syka, Josef

    2010-01-01

    Roč. 209, č. 1 (2010), s. 123-130. ISSN 0166-4328 R&D Projects: GA ČR GA309/07/1336; GA MŠk(CZ) LC554 Institutional research plan: CEZ:AV0Z50390512 Keywords : auditory cortex * temporal discrimination * hemispheric lateralization Subject RIV: FH - Neurology Impact factor: 3.393, year: 2010

  11. An auditory illusion of infinite tempo change based on multiple temporal levels.

    Directory of Open Access Journals (Sweden)

    Guy Madison

    Full Text Available

    Humans and a few select insect and reptile species synchronise inter-individual behaviour without any time lag by predicting the time of future events rather than reacting to them. This is evident in music performance, dance, and drill. Although repetition of equal time intervals (i.e. isochrony) is the central principle for such prediction, this simple information is used in a flexible and complex way that accommodates multiples, subdivisions, and gradual changes of intervals. The scope of this flexibility remains largely uncharted, and the underlying mechanisms are a matter for speculation. Here I report an auditory illusion that highlights some aspects of this behaviour and that provides a powerful tool for its future study. A sound pattern is described that affords multiple alternative and concurrent rates of recurrence (temporal levels). An algorithm that systematically controls time intervals and the relative loudness among these levels creates an illusion that the perceived rate speeds up or slows down infinitely. Human participants synchronised hand movements with their perceived rate of events, and exhibited a change in their movement rate that was several times larger than the physical change in the sound pattern. The illusion demonstrates the duality between the external signal and the internal predictive process, such that people's tendency to follow their own subjective pulse overrides the overall properties of the stimulus pattern. Furthermore, accurate synchronisation with sounds separated by more than 8 s demonstrates that multiple temporal levels are employed for facilitating temporal organisation and integration by the human brain. A number of applications of the illusion and the stimulus pattern are suggested.

  12. Local field potential correlates of auditory working memory in primate dorsal temporal pole.

    Science.gov (United States)

    Bigelow, James; Ng, Chi-Wing; Poremba, Amy

    2016-06-01

    Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components were observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility that some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special

  13. Temporal Feature Integration for Music Organisation

    OpenAIRE

    Meng, Anders; Larsen, Jan; Hansen, Lars Kai

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisa...

  14. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. PMID:27255816

  15. Resolução temporal auditiva em idosos Auditory temporal resolution in elderly people

    Directory of Open Access Journals (Sweden)

    Flávia Duarte Liporaci

    2010-12-01

    Full Text Available

    PURPOSE: To assess the auditory processing of elderly people using the temporal resolution Gaps-in-Noise test, and to verify whether the presence of hearing loss influences performance on this test. METHODS: Sixty-five elderly listeners, aged between 60 and 79 years, were assessed with the Gaps-in-Noise test. For sample selection, the following procedures were carried out: anamnesis, mini-mental state examination, and basic audiological evaluation. The participants were first studied as a single group and then divided into three groups according to the audiometric results at 500 Hz and 1, 2, 3, 4, and 6 kHz: G1 with normal hearing, G2 with mild hearing loss, and G3 with moderate hearing loss. RESULTS: Across the whole sample, the mean gap detection threshold and percentage of correct responses were 8.1 ms and 52.6% for the right ear, and 8.2 ms and 52.2% for the left ear. In G1, these measures were 7.3 ms and 57.6% for the right ear, and 7.7 ms and 55.8% for the left ear. In G2, they were 8.2 ms and 52.5% for the right ear, and 7.9 ms and 53.2% for the left ear. In G3, they were 9.2 ms and 45.2% for both ears. CONCLUSION: The presence of hearing loss raised gap detection thresholds and lowered the percentage of correct responses in the Gaps-in-Noise test.
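    The kind of gap-detection stimulus underlying such tests can be sketched in a few lines of NumPy. This is an illustrative construction, not the standardized Gaps-in-Noise recordings; the function name and all parameter values are assumptions:

```python
import numpy as np

def gap_in_noise(dur=0.5, gap_ms=8.0, gap_at=0.25, fs=44100, seed=0):
    """Sketch of a gap-detection stimulus (illustrative, not the
    clinical GIN material): white noise of `dur` seconds with a
    silent gap of `gap_ms` milliseconds inserted at `gap_at` seconds.
    A listener's threshold is the shortest gap reliably detected."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(dur * fs))
    start = int(gap_at * fs)
    stop = start + int(gap_ms * fs / 1000.0)
    x[start:stop] = 0.0  # the silent gap to be detected
    return x
```

    Shortening `gap_ms` toward a listener's threshold (around 8-9 ms for the groups above) makes the gap progressively harder to detect.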

  16. Spectral vs. temporal auditory processing in Specific Language Impairment: A developmental ERP study

    OpenAIRE

    Čeponienė, R.; Cummings, A.; Wulfeck, B.; Ballantyne, A; Townsend, J.

    2009-01-01

    Pre-linguistic sensory deficits, especially in “temporal” processing, have been implicated in developmental Language Impairment (LI). However, recent evidence has been equivocal with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral processing, and their interaction, in typical children and those with LI (7–17 years; n=25 per group). The stimuli were 3 CV syllables and 3 consonant-to-vow...

  17. Visual-auditory integration for visual search: a behavioral study in barn owls

    OpenAIRE

    Yael eHazan; Inna eYarin; Yonatan eKra; Hermann eWagner; Yoram eGutfreund

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine whether the integration of visual and auditory signals contributes to the process of guiding attention towards salient st...

  18. Temporal encoding precision of bat auditory neurons tuned to target distance deteriorates on the way to the cortex.

    Science.gov (United States)

    Macías, Silvio; Hechavarría, Julio C; Kössl, Manfred

    2016-03-01

    During echolocation, bats estimate distance to avoid obstacles and capture moving prey. The primary distance cue is the delay between the bat's emitted echolocation pulse and the return of an echo. In the bat's auditory system, echo delay-tuned neurons that respond only to pulse-echo pairs having a specific echo delay serve target distance calculation. Accurate prey localization should benefit from the spike precision in such neurons. Here we show that delay-tuned neurons in the inferior colliculus of the mustached bat respond with higher temporal precision, shorter latency, and shorter response duration than those of the auditory cortex. Based on these characteristics, we suggest that collicular neurons are best suited for a fast and accurate response that could lead to fast behavioral reactions, while cortical neurons, with coarser temporal precision and longer latencies and response durations, could be more appropriate for integrating acoustic information over time. The latter could be important for the formation of biosonar images. PMID:26785850

  19. The efficacy of the Berard Auditory Integration Training method for learners with attention difficulties / Hannelie Kemp

    OpenAIRE

    Kemp, Johanna Jacoba

    2010-01-01

    Research on the Berard Auditory Integration Training method has shown improvement in the regulation of attention, activity, and impulsivity of children whose auditory systems have been re-trained. Anecdotal reports have found improvements in sleeping patterns, balance, allergies, eyesight, eating patterns, depression, and other seemingly unrelated physiological states. During the Auditory Integration Training (AIT) procedure, dynamic music, with a wide range of frequencies, is processed through a...

  20. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds, since it can enhance their audibility when embedded in similar background interferers, a phenomenon referred to as comodulation masking release (CMR). Knowledge of the auditory processing of amplitude modulations therefore provides crucial information for a better understanding of how the auditory system analyses acoustic scenes.

  1. Auditory disturbances promote temporal clustering of yawning and stretching in small groups of budgerigars (Melopsittacus undulatus).

    Science.gov (United States)

    Miller, Michael L; Gallup, Andrew C; Vogel, Andrea R; Clark, Anne B

    2012-08-01

    Yawning may serve both social and nonsocial functions. When budgerigars (Melopsittacus undulatus) are briefly held, simulating capture by a predator, the temporal pattern of yawning changes. When this species is observed in a naturalistic setting (an undisturbed flock), yawning and also stretching, a related behavior, are mildly contagious. On the basis of these findings, we hypothesized that a stressful event would be followed by the clustering of these behaviors in a group of birds, which may be facilitated both by a standard pattern of responding to a startling stressor and by contagion. In this study, we measured yawning and stretching in 4-bird groups following a nonspecific stressor (loud white noise) for a period of 1 hr, determining whether auditory disturbances alter the timing and frequency of these behaviors. Our results show that stretching, and to a lesser degree yawning, were nonrandomly clumped in time following the auditory disturbances, indicating that the temporal clustering is sensitive to, and enhanced by, environmental stressors while in small groups. No decrease in yawning such as that found after handling stress was observed immediately after the loud noise, but a similar increase in yawning 20 min later was observed. Future research is required to tease apart the roles of behavioral contagion and a time-setting effect following a startle in this species. This research is of interest because of the potential role that temporal clumping of yawning and stretching could play in both the collective detection of, and response to, local disturbances or predation threats. PMID:22268553

  2. Temporal feature integration for music genre classification

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan;

    2007-01-01

    Temporal feature integration is the process of combining all the feature vectors in a time window into a single feature vector in order to capture the relevant temporal information in the window. The mean and variance along the temporal dimension are often used for temporal feature integration, but they capture neither the temporal dynamics nor the dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music genre classification. This model gives two different feature sets, the diagonal autoregressive (DAR) and multivariate autoregressive (MAR) features, which are compared against the baseline mean-variance as well as two other temporal feature integration techniques. Reproducibility in the performance ranking of temporal feature integration methods was demonstrated using two data sets with five and eleven music genres...
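    As a concrete illustration, the mean-variance baseline and a diagonal autoregressive (DAR) style feature can be sketched in NumPy. This is a simplified reconstruction under stated assumptions (per-dimension least-squares AR fits, hypothetical function names and model order), not the authors' implementation:

```python
import numpy as np

def mean_var_integration(frames):
    """Baseline temporal feature integration: stack the per-dimension
    mean and variance of all feature vectors in the window."""
    return np.concatenate([frames.mean(axis=0), frames.var(axis=0)])

def dar_features(frames, order=3):
    """DAR-style integration (sketch): fit an AR(order) model to each
    feature dimension independently by least squares and use the
    coefficients as the integrated feature vector."""
    n, d = frames.shape
    coeffs = []
    for j in range(d):
        x = frames[:, j]
        # Column k holds lag-(k+1) samples: predict x[t] from x[t-1..t-order]
        X = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
        y = x[order:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        coeffs.append(a)
    return np.concatenate(coeffs)
```

    Unlike the mean and variance, the AR coefficients capture how each feature dimension evolves over the window, which is exactly the temporal dynamics the baseline discards (the full MAR model additionally fits cross-dimension dependencies).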

  3. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence.

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D; Chait, Maria

    2016-09-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from one chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
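    The chord-sequence construction described above can be sketched as follows; the function name and all parameter values are illustrative assumptions, not those used in the study:

```python
import numpy as np

def sfg_stimulus(n_chords=20, tones_per_chord=8, fig_freqs=(500.0, 1000.0, 1500.0),
                 fig_onset=10, chord_ms=50, fs=16000, seed=1):
    """Sketch of a stochastic figure-ground (SFG) signal: a sequence of
    brief chords of random pure tones; from chord `fig_onset` onward a
    fixed set of frequencies repeats across chords, forming the
    'figure' against the randomly varying 'ground'."""
    rng = np.random.default_rng(seed)
    n = int(chord_ms * fs / 1000)           # samples per chord
    t = np.arange(n) / fs
    chords = []
    for i in range(n_chords):
        freqs = list(rng.uniform(200.0, 7000.0, size=tones_per_chord))
        if i >= fig_onset:
            freqs += list(fig_freqs)        # repeated components = figure
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / len(freqs))   # normalize amplitude
    return np.concatenate(chords)
```

    Because only the repeated components are coherent across chords, segregating the figure requires the temporal-coherence computation the study probes; no single spectral slice distinguishes figure from ground.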

  4. Sub-second temporal processing : effects of modality and spatial change on brief visual and auditory time judgments

    OpenAIRE

    Retsa, Chryssoula

    2013-01-01

    The present thesis set out to investigate how sensory modality and spatial presentation influence visual and auditory duration judgments in the millisecond range. The effects of modality and spatial location were explored by considering right and left side presentations of mixed or blocked visual and auditory stimuli. Several studies have shown that perceived duration of a stimulus can be affected by various extra-temporal factors such as modality and spatial position. Audit...

  5. Temporal correlation between auditory neurons and the hippocampal theta rhythm induced by novel stimulations in awake guinea pigs.

    Science.gov (United States)

    Liberman, Tamara; Velluti, Ricardo A; Pedemonte, Marisa

    2009-11-17

    The hippocampal theta rhythm is associated with the processing of sensory systems such as touch, smell, vision and hearing, as well as with motor activity, the modulation of autonomic processes such as cardiac rhythm, and learning and memory processes. The discovery of temporal correlation (phase locking) between the theta rhythm and both visual and auditory neuronal activity has led us to postulate the participation of such rhythm in the temporal processing of sensory information. In addition, changes in attention can modify both the theta rhythm and the auditory and visual sensory activity. The present report tested the hypothesis that the temporal correlation between auditory neuronal discharges in the inferior colliculus central nucleus (ICc) and the hippocampal theta rhythm could be enhanced by changes in sensory stimulation. We presented chronically implanted guinea pigs with auditory stimuli that varied over time, and recorded the auditory response during wakefulness. It was observed that the stimulation shifts were capable of producing the temporal phase correlations between the theta rhythm and the ICc unit firing, and they differed depending on the stimulus change performed. Such correlations disappeared approximately 6 s after the change presentation. Furthermore, the power of the hippocampal theta rhythm increased in half of the cases presented with a stimulation change. Based on these data, we propose that the degree of correlation between the unitary activity and the hippocampal theta rhythm varies with--and therefore may signal--stimulus novelty. PMID:19716364

  6. Temporal Feature Integration for Music Organisation

    DEFF Research Database (Denmark)

    Meng, Anders

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisation. Human evaluations of these have been obtained to assess the subjectivity of the datasets. A special emphasis is put on the product probability kernel, for which the MAR model is derived in closed form. A thorough investigation, using robust machine learning methods, of the MAR model on two different music genre classification datasets shows a statistically significant improvement using...

  7. Quantifying auditory temporal stability in a large database of recorded music.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    Full Text Available

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature"), none has sought to quantify the temporal stability of a series of detected beats. Such a method, a "Balanced Evaluation of Auditory Temporal Stability" (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.
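    Given a series of detected beat times, temporal stability can be quantified along the lines sketched below; the two statistics shown (median tempo and the coefficient of variation of inter-beat intervals) are illustrative stand-ins, not the actual BEATS statistics:

```python
import numpy as np

def beat_stability(beat_times):
    """Sketch of beat-stability statistics (hypothetical, far simpler
    than BEATS): from detected beat times in seconds, return the median
    tempo in BPM and the coefficient of variation (CV) of the
    inter-beat intervals. A low CV indicates a temporally stable beat."""
    ibis = np.diff(np.asarray(beat_times, dtype=float))  # inter-beat intervals
    tempo_bpm = 60.0 / np.median(ibis)
    cv = ibis.std() / ibis.mean()
    return tempo_bpm, cv
```

    A playlist generator could then keep only tracks whose CV falls below a chosen stability threshold at the desired tempo.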

  8. Age-related improvements in auditory temporal resolution in reading-impaired children.

    Science.gov (United States)

    Hautus, Michael J; Setchell, Gregory J; Waldie, Karen E; Kirk, Ian J

    2003-02-01

    Individuals with developmental dyslexia show impairments in processing that require precise timing of sensory events. Here, we show that in a test of auditory temporal acuity (a gap-detection task) children ages 6-9 years with dyslexia exhibited a significant deficit relative to age-matched controls. In contrast, this deficit was not observed in groups of older reading-impaired individuals (ages 10-11 years; 12-13 years) or in adults (ages 23-25 years). It appears, therefore, that early temporal resolution deficits in those with reading impairments may significantly ameliorate over time. However, the occurrence of an early deficit in temporal acuity may be antecedent to other language-related perceptual problems (particularly those related to phonological processing) that persist after the primary deficit has resolved. This result suggests that if remedial interventions targeted at temporal resolution deficits are to be effective, the early detection of the deficit and early application of the remedial programme is especially critical. PMID:12625375

  9. Quantifying Auditory Temporal Stability in a Large Database of Recorded Music

    Science.gov (United States)

    Ellis, Robert J.; Duan, Zhiyan; Wang, Ye

    2014-01-01

    “Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal stability of a series of detected beats. Such a method, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications. PMID:25469636

  10. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Sophie Molholm

    2009-04-01

    Full Text Available

    That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked-potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e. rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.

  11. Mechanisms of spectral and temporal integration in the mustached bat inferior colliculus

    Directory of Open Access Journals (Sweden)

    Jeffrey James Wenstrup

    2012-10-01

    Full Text Available

    This review describes mechanisms and circuitry underlying combination-sensitive response properties in the auditory brainstem and midbrain. Combination-sensitive neurons, performing a type of auditory spectro-temporal integration, respond to specific, properly timed combinations of spectral elements in vocal signals and other acoustic stimuli. While these neurons are known to occur in the auditory forebrain of many vertebrate species, the work described here establishes their origin in the auditory brainstem and midbrain. Focusing on the mustached bat, we review several major findings: (1) Combination-sensitive responses involve facilitatory interactions, inhibitory interactions, or both when activated by distinct spectral elements in complex sounds. (2) Combination-sensitive responses are created in distinct stages: inhibition arises mainly in lateral lemniscal nuclei of the auditory brainstem, while facilitation arises in the inferior colliculus (IC) of the midbrain. (3) Spectral integration underlying combination-sensitive responses requires a low-frequency input tuned well below a neuron's characteristic frequency (ChF). Low-ChF neurons in the auditory brainstem project to high-ChF regions in the brainstem or IC to create combination sensitivity. (4) At their sites of origin, both facilitatory and inhibitory combination-sensitive interactions depend on glycinergic inputs and are eliminated by glycine receptor blockade. Surprisingly, facilitatory interactions in IC depend almost exclusively on glycinergic inputs and are largely independent of glutamatergic and GABAergic inputs. (5) The medial nucleus of the trapezoid body, the lateral lemniscal nuclei, and the IC play critical roles in creating combination-sensitive responses. We propose that these mechanisms, based on work in the mustached bat, apply to a broad range of mammals and other vertebrates that depend on temporally sensitive integration of information across the audible spectrum.

  12. Multisensory temporal integration: Task and stimulus dependencies

    Science.gov (United States)

    Stevenson, Ryan A.; Wallace, Mark T.

    2013-01-01

    The ability of human sensory systems to integrate information across the different modalities provides a wide range of behavioral and perceptual benefits. This integration process is dependent upon the temporal relationship of the different sensory signals, with stimuli occurring close together in time typically resulting in the largest behavioral changes. The range of temporal intervals over which such benefits are seen is typically referred to as the temporal binding window (TBW). Given the importance of temporal factors in multisensory integration under both normal and atypical circumstances such as autism and dyslexia, the TBW has been measured with a variety of experimental protocols that differ according to criterion, task, and stimulus type, making comparisons across experiments difficult. In the current study we attempt to elucidate the role that these various factors play in the measurement of this important construct. The results show a strong effect of stimulus type, with the TBW assessed with speech stimuli being both larger and more symmetrical than that seen using simple and complex non-speech stimuli. These effects are robust across task and statistical criteria, and are highly consistent within individuals, suggesting substantial overlap in the neural and cognitive operations that govern multisensory temporal processes. PMID:23604624
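    A temporal binding window of the kind described above is commonly estimated by fitting a bell-shaped function to the proportion of "synchronous" responses across stimulus onset asynchronies (SOAs) and taking its width. The sketch below is illustrative only: the response data, the symmetric Gaussian model, the grid-search ranges, and the full-width-at-half-maximum criterion are all assumptions, not the protocol of the study summarized here.

```python
import math

# Hypothetical proportions of "synchronous" responses at each
# audiovisual onset asynchrony (ms); negative = auditory-leading.
soas = [-300, -200, -100, 0, 100, 200, 300]
p_sync = [0.10, 0.35, 0.80, 0.95, 0.85, 0.50, 0.15]

def gaussian(x, amp, mu, sigma):
    return amp * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def fit_tbw(soas, p_sync):
    """Grid-search least-squares fit of a unit-amplitude Gaussian.

    Returns (mu, fwhm): mu is the point of subjective simultaneity,
    and the full width at half maximum serves as the TBW estimate.
    """
    best = (float("inf"), None, None)
    for mu in range(-100, 101, 5):          # candidate centers (ms)
        for sigma in range(20, 400, 5):     # candidate widths (ms)
            err = sum((gaussian(x, 1.0, mu, sigma) - p) ** 2
                      for x, p in zip(soas, p_sync))
            if err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma  # ~2.355 * sigma
    return mu, fwhm
```

    With data skewed toward visual-leading tolerance, as above, the fitted center lands slightly on the positive-SOA side, which is one way the asymmetry noted in the record can show up in practice.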

  13. Using a staircase procedure for the objective measurement of auditory stream integration and segregation thresholds

    Directory of Open Access Journals (Sweden)

    Mona Isabel Spielmann

    2013-08-01

    Full Text Available Auditory scene analysis describes the ability to segregate relevant sounds out from the environment and to integrate them into a single sound stream, using the characteristics of the sounds to determine whether or not they are related. This study aims to contrast task performance in objective threshold measurements of segregation and integration using identical stimuli, manipulating two variables known to influence streaming: inter-stimulus interval (ISI) and frequency difference (Δf). For each measurement, one parameter (either ISI or Δf) was held constant while the other was altered in a staircase procedure. By using this paradigm, it is possible to test within subjects across multiple conditions, covering a wide Δf and ISI range in one testing session. The objective tasks were based on across-stream temporal judgments (facilitated by integration) and within-stream deviance detection (facilitated by segregation). Results show that the objective integration task is well suited for combination with the staircase procedure, as it yields consistent threshold measurements for separate variations of ISI and Δf, as well as being significantly related to the subjective thresholds. The objective segregation task appears less suited to the staircase procedure. With the integration-based staircase paradigm, a comprehensive assessment of streaming thresholds can be obtained in a relatively short space of time. This permits efficient threshold measurements, particularly in groups for which there is little prior knowledge of the relevant parameter space for streaming perception.
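    An adaptive staircase of the kind used in this record can be sketched in a few lines. Everything below is illustrative rather than the authors' exact procedure: the simulated listener's psychometric function, the fixed step size, and the reversal-averaging rule are assumptions. A 2-down/1-up rule converges near the 70.7%-correct point of the psychometric function.

```python
import random

def simulate_listener(delta_f, true_threshold=3.0):
    """Hypothetical observer: more likely to respond correctly when the
    frequency difference (delta_f, arbitrary units) exceeds threshold."""
    p_correct = 0.5 + 0.5 / (1.0 + (true_threshold / max(delta_f, 1e-9)) ** 4)
    return random.random() < p_correct

def staircase(start=8.0, step=0.5, n_reversals=8):
    """Minimal 2-down/1-up staircase on delta_f.

    Two consecutive correct responses make the task harder (smaller
    delta_f); one error makes it easier. The threshold estimate is the
    mean of the late reversal points.
    """
    delta_f, correct_run = start, 0
    reversals, direction = [], 0   # direction: -1 descending, +1 ascending
    while len(reversals) < n_reversals:
        if simulate_listener(delta_f):
            correct_run += 1
            if correct_run == 2:               # two correct -> step down
                correct_run = 0
                if direction == +1:
                    reversals.append(delta_f)  # turning point
                direction = -1
                delta_f = max(delta_f - step, 0.1)
        else:                                  # one wrong -> step up
            correct_run = 0
            if direction == -1:
                reversals.append(delta_f)      # turning point
            direction = +1
            delta_f += step
    late = reversals[2:]                       # discard early reversals
    return sum(late) / len(late)
```

    In a real ISI staircase the same logic applies with delta_f replaced by the inter-stimulus interval and the step direction inverted as needed.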

  14. Temporal expectation and attention jointly modulate auditory oscillatory activity in the beta band.

    Directory of Open Access Journals (Sweden)

    Ana Todorovic

    Full Text Available The neural response to a stimulus is influenced by endogenous factors such as expectation and attention. Current research suggests that expectation and attention exert their effects in opposite directions, where expectation decreases neural activity in sensory areas, while attention increases it. However, expectation and attention are usually studied either in isolation or confounded with each other. A recent study suggests that expectation and attention may act jointly on sensory processing, by increasing the neural response to expected events when they are attended, but decreasing it when they are unattended. Here we test this hypothesis in an auditory temporal cueing paradigm using magnetoencephalography in humans. In our study participants attended to, or away from, tones that could arrive at expected or unexpected moments. We found a decrease in auditory beta band synchrony to expected (versus unexpected tones if they were unattended, but no difference if they were attended. Modulations in beta power were already evident prior to the expected onset times of the tones. These findings suggest that expectation and attention jointly modulate sensory processing.

  15. A physiologically inspired model of auditory stream segregation based on a temporal coherence analysis

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2012-01-01

    The ability to perceptually separate acoustic sources and focus one’s attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422...... activity across frequency. Using this approach, the described model is able to quantitatively account for classical streaming phenomena relying on frequency separation and tone presentation rate, such as the temporal coherence boundary and the fission boundary [L. P. A. S. van Noorden, doctoral...... dissertation, Institute for Perception Research, Eindhoven, NL, (1975)]. The same model also accounts for the perceptual grouping of distant spectral components in the case of synchronous presentation. The most essential components of the front-end and back-end processing in the framework of the presented...

  16. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  17. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to...

  18. Implicit learning of between-group intervals in auditory temporal structures.

    Science.gov (United States)

    Terry, J; Stevens, C J; Weidemann, G; Tillmann, B

    2016-08-01

    Implicit learning of temporal structure has primarily been reported when events within a sequence (e.g., visual-spatial locations, tones) are systematically ordered and correlated with the temporal structure. An auditory serial reaction time task was used to investigate implicit learning of temporal intervals between pseudorandomly ordered syllables. Over exposure, participants identified syllables presented in sequences with weakly metrical temporal structures. In a test block, the temporal structure differed from exposure only in the duration of the interonset intervals (IOIs) between groups. It was hypothesized that reaction time (RT) to syllables following between-group IOIs would decrease with exposure and increase at test. In Experiments 1 and 2, the sequences presented over exposure and test were counterbalanced across participants (Pattern 1 and Pattern 2 conditions). An RT increase at test to syllables following between-group IOIs was only evident in the condition that presented an exposure structure with a slightly stronger meter (Pattern 1 condition). The Pattern 1 condition also elicited a global expectancy effect: Test block RT slowed to earlier-than-expected syllables (i.e., syllables shifted to an earlier beat) but not to later-than-expected syllables. Learning of between-group IOIs and the global expectancy effect extended to the Pattern 2 condition when meter was strengthened with an external pulse (Experiment 2). Experiment 3 further demonstrated implicit learning of a new weakly metrical structure with only earlier-than-expected violations at test. Overall findings demonstrate learning of weakly metrical rhythms without correlated event structures (i.e., sequential syllable orders). They further suggest the presence of a global expectancy effect mediated by metrical strength. PMID:27301354

  19. Morphometrical Study of the Temporal Bone and Auditory Ossicles in Guinea Pig

    Directory of Open Access Journals (Sweden)

    Ahmadali Mohammadpour

    2011-03-01

    Full Text Available In this research, anatomical descriptions of the structure of the temporal bone and auditory ossicles were made based on dissection of ten guinea pigs. The results showed that, in the guinea pig, the temporal bone was similar to that of other animals and had three parts: squamous, tympanic, and petrous. The tympanic part was much better developed and consisted of an oval-shaped tympanic bulla with many recesses in the tympanic cavity. The auditory ossicles of the guinea pig consisted of three small bones, the malleus, incus, and stapes, but the head of the malleus and the body of the incus were fused, forming a malleoincudal complex. The averages of the morphometric parameters showed that the malleus was 3.53 ± 0.22 mm in total length. In addition to the head and handle, the malleus had two distinct processes: lateral and muscular. The incus had a total length of 1.23 ± 0.02 mm. It had a long and a short crus, with the long crus better developed than the short. The lenticular bone was a round bone that articulated with the long crus of the incus. The stapes had a total length of 1.38 ± 0.04 mm. The anterior crus (0.86 ± 0.08 mm) was larger than the posterior crus (0.76 ± 0.08 mm). It is concluded that, in the guinea pig, the malleus and incus are fused into a malleoincudal complex, whereas in other animals these are separate bones. The stapes is larger, has a triangular shape, and its anterior and posterior crura are thicker than in other rodents. Therefore, the guinea pig is a good laboratory animal for otological studies.

  20. Auditory Processing in Children with Specific Language Impairments: Are there Deficits in Frequency Discrimination, Temporal Auditory Processing or General Auditory Processing?

    OpenAIRE

    Nickisch, Andreas; Massinger, Claudia

    2009-01-01

    Background/Aims: Specific language impairment (SLI) is believed to be associated with nonverbal auditory (NVA) deficits. It remains unclear, however, whether children with SLI show deficits in auditory time processing, time processing in general, frequency discrimination (FD), or NVA processing in general. Patients and Methods: Twenty-seven children (aged 8-11) with SLI and 27 control children (CG), matched for age and gender, were retrospectively compared with regard to their performance on ...

  1. Auditory priming of frequency and temporal information: Effects of lateralized presentation

    OpenAIRE

    List, Alexandra; Justus, Timothy

    2007-01-01

    Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralized. In vision, lateralized local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli wer...

  2. Central Auditory Processing of Temporal and Spectral-Variance Cues in Cochlear Implant Listeners.

    Directory of Open Access Journals (Sweden)

    Carol Q Pham

    Full Text Available Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH) listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but that processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. In contrast, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues, but not of temporal cues, in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners.

  3. A network for sensory-motor integration: what happens in the auditory cortex during piano playing without acoustic feedback?

    Science.gov (United States)

    Baumann, Simon; Koeneke, Susan; Meyer, Martin; Lutz, Kai; Jäncke, Lutz

    2005-12-01

    Playing a musical instrument requires efficient auditory as well as motor processing. We provide evidence for the existence of a neuronal network of secondary and higher-order areas belonging to the auditory and motor modality that is important in the integration of auditory and motor domains. PMID:16597763

  4. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT)

    Directory of Open Access Journals (Sweden)

    Sahar Mohammad Esmaeilzadeh

    2013-10-01

    Full Text Available Introduction: Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT and play therapy (PT. There have been several studies which focus on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach of AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children.   Materials and Methods: In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, for example. We also used reliable websites such as American Choral Directors Association and Joint Committee on Infant Hearing websites. The websites were reviewed and key words in this article used to find appropriate references. Those articles which are related to ours in content were selected.    Results: Recent technologies have brought about great advancement in the field of hearing disorders. Now these impairments can be detected at birth, and in the majority of cases, hearing impaired children can develop fluent spoken language through audition. According to researches on the relationship between hearing impaired children’s communication and language skills and different approaches of therapy, it is known that learning through listening and

  5. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Directory of Open Access Journals (Sweden)

    Juan Huang

    Full Text Available Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information to "feeling" musical rhythm, we investigated the degree to which auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like) rhythms and 'triple' (waltz-like) rhythms, presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects are presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  6. [The auditory pathway: levels of integration of information and principal neurotransmitters].

    Science.gov (United States)

    Hernández-Zamora, Edgar; Poblano, Adrián

    2014-01-01

    In this paper we studied the central auditory pathway (CAP) from an anatomical, physiological, and neurochemical standpoint, from the inner ear through the brainstem and thalamus to the temporal auditory cortex. The characteristics of the spiral ganglion of Corti, auditory nerve, cochlear nuclei, superior olivary complex, lateral lemniscus, inferior colliculus, medial geniculate body, and auditory cortex, including the auditory efferent pathway, are given. Transmission along the CAP proceeds as electrical impulses travel through axons, allowing ions to enter the neuron and vesicles to release their neurotransmitters (NT) into the synaptic space. The NT changes the functioning of the target cell: binding of the NT to specific receptors on the next nerve cell causes an influx of ions through receptor channels, resulting in a postsynaptic potential that is propagated along the CAP. In addition, the effects of NTs are not limited to transmission; they also act as trophic agents that promote the formation of new neural networks. The anatomy, physiology, neurochemistry, and the different types of synapses of the CAP are not yet fully understood, and they remain under investigation because of their relevance to the treatment of various central auditory disorders. PMID:25275847

  7. Auditory processing performance in blind people

    Directory of Open Access Journals (Sweden)

    Ludmilla Vilas Boas

    2011-08-01

    Full Text Available Hearing has an important role in human development and in the social adaptation of blind people. OBJECTIVE: To evaluate the performance of temporal auditory processing in blind people; to characterize temporal resolution ability; to characterize temporal ordering ability; and to compare the performance of the study population across the applied tests. METHODS: Fifteen blind adults participated in this cross-sectional study, which was approved by the Pernambuco Catholic University Ethics Committee (no. 003/2008). Data were collected with the Random Gap Detection Test (RGDT) in its original and expanded versions and with the duration pattern and frequency pattern tests. RESULTS: Temporal auditory processing was excellent: the average composite threshold in the original RGDT version was 4.98 ms, and 50 ms for all frequencies in the expanded version. PPS and DPS results ranged from 95% to 100%. There were no quantitative differences in the comparison of tests, but oral reports suggested that the original RGDT version was more difficult. CONCLUSIONS: The study sample performed well in temporal auditory processing, as well as in temporal resolution and temporal ordering abilities.

  8. Noise-induced hearing loss increases the temporal precision of complex envelope coding by auditory-nerve fibers

    Directory of Open Access Journals (Sweden)

    Michael Gregory Heinz

    2014-02-01

    Full Text Available While changes in cochlear frequency tuning are thought to play an important role in the perceptual difficulties of people with sensorineural hearing loss (SNHL), the possible role of temporal processing deficits remains less clear. Our knowledge of temporal envelope coding in the impaired cochlea is limited to two studies that examined auditory-nerve fiber responses to narrowband amplitude-modulated stimuli. In the present study, we used Wiener-kernel analyses of auditory-nerve fiber responses to broadband Gaussian noise in anesthetized chinchillas to quantify changes in temporal envelope coding with noise-induced SNHL. Temporal modulation transfer functions (TMTFs) and temporal windows of sensitivity to acoustic stimulation were computed from 2nd-order Wiener kernels and analyzed to estimate the temporal precision, amplitude, and latency of envelope coding. Noise overexposure was associated with slower (less negative) TMTF roll-off with increasing modulation frequency and reduced temporal window duration. The results show that at equal stimulus sensation level, SNHL increases the temporal precision of envelope coding by 20-30%. Furthermore, SNHL increased the amplitude of envelope coding by 50% in fibers with CFs from 1-2 kHz and decreased mean response latency by 0.4 ms. While a previous study of envelope coding demonstrated a similar increase in response amplitude, the present study is the first to show enhanced temporal precision. This new finding may relate to the use of a more complex stimulus with broad frequency bandwidth and a dynamic temporal envelope. Exaggerated neural coding of fast envelope modulations may contribute to perceptual difficulties in people with SNHL by acting as a distraction from more relevant acoustic cues, especially in fluctuating background noise. Finally, the results underscore the value of studying sensory systems with more natural, real-world stimuli.
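    The reverse-correlation idea behind these Wiener-kernel analyses can be illustrated with a toy first-order kernel estimate (the study above used 2nd-order kernels of recorded responses; this sketch is not their method). The "fiber" here is a hypothetical linear filter plus a spike threshold, and the filter shape, threshold, and analysis window are all made-up parameters. Averaging the noise segments that precede each spike recovers the shape of the underlying filter.

```python
import random

random.seed(1)

# Hypothetical linear "fiber": broadband Gaussian noise is the stimulus,
# the membrane drive is the noise passed through a known 5-tap filter,
# and a spike is emitted whenever the drive exceeds a threshold.
true_filter = [0.1, 0.4, 1.0, 0.4, 0.1]
n_samples = 20000
stim = [random.gauss(0.0, 1.0) for _ in range(n_samples)]

n_taps = len(true_filter)
drive = [sum(true_filter[k] * stim[t - k] for k in range(n_taps))
         for t in range(n_taps, n_samples)]
spikes = [i + n_taps for i, d in enumerate(drive) if d > 2.5]

def spike_triggered_average(stim, spikes, window=8):
    """First-order Wiener-kernel estimate: average the stimulus
    segments ending at each spike time (most recent sample last)."""
    sta, count = [0.0] * window, 0
    for t in spikes:
        if t < window:              # skip spikes without a full window
            continue
        seg = stim[t - window + 1 : t + 1]
        sta = [s + x for s, x in zip(sta, seg)]
        count += 1
    return [s / count for s in sta]

sta = spike_triggered_average(stim, spikes)
# The recovered kernel peaks two samples before the spike (index window-3),
# mirroring the peak of true_filter; taps the filter never touches
# average out to near zero.
```

    A TMTF-style summary would then be obtained by Fourier-transforming such kernels and examining how their magnitude rolls off with modulation frequency.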

  9. Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns.

    Directory of Open Access Journals (Sweden)

    Andres Carrasco

    Full Text Available Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency-modulated sweeps), and ecologically relevant (conspecific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.

  10. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe

    OpenAIRE

    Berns, Gregory S.; Cook, Peter F.; Foxley, Sean; Jbabdi, Saad; Miller, Karla L.; Marino, Lori

    2015-01-01

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this...

  11. Development of working memory, speech perception and auditory temporal resolution in children with attention deficit hyperactivity disorder and language impairment

    OpenAIRE

    Norrelgen, Fritjof

    2002-01-01

    Speech perception (SP), verbal working memory (WM) and auditory temporal resolution (ATR) have been studied in children with attention deficit hyperactivity disorder (ADHD) and language impairment (LI), as well as in reference groups of typically developed children. A computerised method was developed, in which discrimination of same or different pairs of stimuli was tested. In a functional Magnetic Resonance Imaging (fMRI) study a similar test was used to explore the neural...

  12. Interventions To Facilitate Auditory, Visual, and Motor Integration in Autism: A Review of the Evidence.

    Science.gov (United States)

    Dawson, Geraldine; Watling, Renee

    2000-01-01

    Evidence is reviewed on the prevalence of sensory and motor abnormalities in autism and the effectiveness of three interventions designed to address such abnormalities: sensory integration therapy, traditional occupational therapy, and auditory integration training. Results of these limited studies provided no firm support for the interventions.…

  13. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT)

    OpenAIRE

    Sahar Mohammad Esmaeilzadeh; Shahla Sharifi; Hamid Tayarani Niknezhad

    2013-01-01

    Introduction: Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communica...

  14. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  15. Auditory-olfactory integration: congruent or pleasant sounds amplify odor pleasantness.

    Science.gov (United States)

    Seo, Han-Seok; Hummel, Thomas

    2011-03-01

    Even though we often perceive odors while hearing auditory stimuli, surprisingly little is known about auditory-olfactory integration. This study aimed to investigate the influence of auditory cues on ratings of odor intensity and/or pleasantness, with a focus on 2 factors: "congruency" (Experiment 1) and the "halo/horns effect" of auditory pleasantness (Experiment 2). First, in Experiment 1, participants were presented with congruent, incongruent, or neutral sounds before and during the presentation of odor. Participants rated the odors as being more pleasant while listening to a congruent sound than while listening to an incongruent sound. In Experiment 2, participants received pleasant or unpleasant sounds before and during the presentation of either a pleasant or unpleasant odor. The hedonic valence of the sounds transferred to the odors, irrespective of the hedonic tone of the odor itself. The more the participants liked the preceding sound, the more pleasant the subsequent odor became. In contrast, the ratings of odor intensity appeared to be little or not at all influenced by the congruency or hedonic valence of the auditory cue. In conclusion, the present study for the first time provides an empirical demonstration that auditory cues can modulate odor pleasantness. PMID:21163913

  16. Effects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure.

    Science.gov (United States)

    Moore, Brian C J

    2016-01-01

    Within the cochlea, broadband sounds like speech and music are filtered into a series of narrowband signals, each of which can be considered as a relatively slowly varying envelope (ENV) imposed on a rapidly oscillating carrier (the temporal fine structure, TFS). Information about ENV and TFS is conveyed in the timing and short-term rate of nerve spikes in the auditory nerve. There is evidence that both hearing loss and increasing age adversely affect the ability to use TFS information, but in many studies the effects of hearing loss and age have been confounded. This paper summarises evidence from studies that allow some separation of the effects of hearing loss and age. The results suggest that the monaural processing of TFS information, which is important for the perception of pitch and for segregating speech from background sounds, is adversely affected by both hearing loss and increasing age, the former being more important. The monaural processing of ENV information is hardly affected by hearing loss or by increasing age. The binaural processing of TFS information, which is important for sound localisation and the binaural masking level difference, is also adversely affected by both hearing loss and increasing age, but here the latter seems more important. The deterioration of binaural TFS processing with increasing age appears to start relatively early in life. The binaural processing of ENV information also deteriorates somewhat with increasing age. The reduced binaural processing abilities found for older/hearing-impaired listeners may partially account for the difficulties that such listeners experience in situations where the target speech and interfering sounds come from different directions in space, as is common in everyday life. PMID:27080640

  17. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

    Full Text Available There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time-periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that this phenomenon should be apparent in naturalistic visual experiences. We tested this using movie-clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty to follow stimuli at the hypothesized 2-3 second scrambling condition. Moreover, this difference alone could not be accounted for by low-level visual properties. This provides the first evidence that this 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.

  18. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    OpenAIRE

    Sugihara, T.; Diltz, M. D.; Averbeck, B. B.; Romanski, L. M.

    2006-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The mul...

  19. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.
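
The width of a temporal integration window in synchrony-judgment designs like this one is commonly estimated by fitting a symmetric curve to the proportion of "synchronous" responses across SOAs. A minimal sketch on synthetic data; the Gaussian model, the 120 ms generating width, and the noise level are illustrative assumptions, not the study's actual fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# The 13 SOAs used in the study (ms); negative = auditory leading
soas = np.array([-360, -300, -240, -180, -120, -60, 0,
                 60, 120, 180, 240, 300, 360], dtype=float)

def sync_curve(soa, width, peak):
    """Gaussian model of the proportion of 'synchronous' judgments vs. SOA."""
    return peak * np.exp(-soa**2 / (2 * width**2))

# Synthetic observer with a 120 ms window plus a little response noise
rng = np.random.default_rng(0)
p_obs = sync_curve(soas, 120.0, 0.95) + rng.normal(0, 0.02, soas.size)

(width_hat, peak_hat), _ = curve_fit(sync_curve, soas, p_obs, p0=[100.0, 1.0])
```

A narrower fitted `width_hat` for musicians than non-musicians would correspond to the result reported above.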

  20. Forward Masking: Temporal Integration or Adaptation?

    DEFF Research Database (Denmark)

    Ewert, Stephan D.; Hau, Ole; Dau, Torsten

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-da...

  1. State-dependent and cell type-specific temporal processing in auditory thalamocortical circuit

    OpenAIRE

    Shuzo Sakata

    2016-01-01

    Ongoing spontaneous activity in cortical circuits defines cortical states, but it still remains unclear how cortical states shape sensory processing across cortical laminae and what type of response properties emerge in the cortex. Recording neural activity from the auditory cortex (AC) and medial geniculate body (MGB) simultaneously with electrical stimulations of the basal forebrain (BF) in urethane-anesthetized rats, we investigated state-dependent spontaneous and auditory-evoked activitie...

  3. Learning-stage-dependent plasticity of temporal coherence in the auditory cortex of rats.

    Science.gov (United States)

    Yokota, Ryo; Aihara, Kazuyuki; Kanzaki, Ryohei; Takahashi, Hirokazu

    2015-05-01

    Temporal coherence among neural populations may contribute importantly to signal encoding, specifically by providing an optimal tradeoff between encoding reliability and efficiency. Here, we considered the possibility that learning modulates the temporal coherence among neural populations in association with well-characterized map plasticity. We previously demonstrated that, in appetitive operant conditioning tasks, the tone-responsive area globally expanded during the early stage of learning, but shrank during the late stage. The present study further showed that phase locking of the first spike to band-specific oscillations of local field potentials (LFPs) significantly increased during the early stage of learning but decreased during the late stage, suggesting that neurons in A1 were more synchronously activated during early learning, whereas they were more asynchronously activated once learning was completed. Furthermore, LFP amplitudes increased during early learning but decreased during later learning. These results suggest that, compared to naïve encoding, early-stage encoding is more reliable but energy-consumptive, whereas late-stage encoding is more energetically efficient. Such a learning-stage-dependent encoding strategy may underlie learning-induced, non-monotonic map plasticity. Accumulating evidence indicates that the cholinergic system is likely to be a shared neural substrate of the processes for perceptual learning and attention, both of which modulate neural encoding in an adaptive manner. Thus, a better understanding of the links between map plasticity and modulation of temporal coherence will likely lead to a more integrated view of learning and attention. PMID:24615394
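
Phase locking of spikes to band-limited LFP oscillations, as measured in this study, is commonly quantified by the resultant length of the LFP phases sampled at spike times. A minimal sketch under idealized assumptions (a noise-free 10 Hz "LFP" and hand-placed spikes; this is not the authors' analysis pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking(spike_times_s, lfp, fs):
    """Resultant length of LFP phases at spike times (0 = no locking, 1 = perfect)."""
    phase = np.angle(hilbert(lfp))                       # instantaneous phase of the LFP
    idx = np.rint(np.asarray(spike_times_s) * fs).astype(int)
    return float(np.abs(np.mean(np.exp(1j * phase[idx]))))

fs = 1000
t = np.arange(0, 1, 1 / fs)
lfp = np.sin(2 * np.pi * 10 * t)                         # idealized 10 Hz band-limited "LFP"

locked_spikes = 0.025 + 0.1 * np.arange(9)               # one spike on every oscillation peak
rng = np.random.default_rng(0)
random_spikes = rng.uniform(0, 0.99, 50)                 # spikes unrelated to the oscillation

plv_locked = phase_locking(locked_spikes, lfp, fs)       # near 1: synchronous activation
plv_random = phase_locking(random_spikes, lfp, fs)       # low: asynchronous activation
```

In the study's terms, early-stage learning would correspond to a higher value of this measure for first spikes, late-stage learning to a lower one.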

  4. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-01

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. PMID:25948273

  5. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291

  6. Semantic validation in spatio-temporal schema integration

    OpenAIRE

    Sotnykova, Anastasiya; Spaccapietra, Stefano

    2007-01-01

    This thesis proposes to address the well-known database integration problem with a new method that combines functionality from database conceptual modeling techniques with functionality from logic-based reasoners. We elaborate on a hybrid - modeling+validation - integration approach for spatio-temporal information integration on the schema level. The modeling part of our methodology is supported by the spatio-temporal conceptual model MADS, whereas the validation part of the integration proces...

  8. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)

    2001-12-01

    To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1-mm-thick, contiguous) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerves, and for the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerves were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve traversed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  9. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  10. Can Spectro-Temporal Complexity Explain the Autistic Pattern of Performance on Auditory Tasks?

    Science.gov (United States)

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…

  11. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates that...

  12. Attention and response control in ADHD. Evaluation through integrated visual and auditory continuous performance test.

    Science.gov (United States)

    Moreno-García, Inmaculada; Delgado-Pardo, Gracia; Roldán-Blasco, Carmen

    2015-01-01

    This study assesses attention and response control through visual and auditory stimuli in a primary care pediatric sample. The sample consisted of 191 participants aged between 7 and 13 years. It was divided into 2 groups: (a) 90 children with ADHD, according to diagnostic (DSM-IV-TR) (APA, 2002) and clinical (ADHD Rating Scale-IV) (DuPaul, Power, Anastopoulos, & Reid, 1998) criteria, and (b) 101 children without a history of ADHD. The aims were: (a) to determine and compare the performance of both groups in attention and response control, and (b) to identify attention and response control deficits in the ADHD group. Assessments were carried out using the Integrated Visual and Auditory Continuous Performance Test (IVA/CPT; Sandford & Turner, 2002). Results showed that the ADHD group had visual and auditory attention deficits, F(3, 170) = 14.38. Children with ADHD showed inattention, mental processing speed deficits, and loss of concentration with visual stimuli. Both groups yielded a better performance in attention with auditory stimuli. PMID:25734571

  13. HIT, hallucination focused integrative treatment as early intervention in psychotic adolescents with auditory hallucinations : a pilot study

    NARCIS (Netherlands)

    Jenner, JA; van de Willige, G

    2001-01-01

    Objective: Early intervention in psychosis is considered important for relapse prevention. The limited results of monotherapies have prompted the development of multimodular programmes. The present study tests the feasibility and effectiveness of HIT, an integrative early-intervention treatment for auditory hallucinations

  14. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    Science.gov (United States)

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26707975

  15. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information to recent findings reported using the frequency-shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)], in terms of the dynamic aspects of F0 control. In a series of experiments, the dependencies of system parameters in F0 control on subject, F0, and style (musical expression versus speaking) were tested using six participants: three male and three female students specialized in musical education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech, the modulation being driven by an M-sequence. The results replicated qualitatively the previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]

  16. Tracking cortical entrainment in neural activity: Auditory processes in human temporal cortex

    OpenAIRE

    Thwaites, Andrew; Nimmo-Smith, Ian; Fonteneau, Elisabeth; Patterson, Roy D.; Buttery, Paula; Marslen-Wilson, William D.

    2015-01-01

    A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two set...

  17. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information, indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
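
Comparing candidate codes by mutual information, as done in this study, can be sketched with a simple plug-in estimator over a joint histogram. The simulated "responses" below (ISIs that track the stimulus period, spike counts that carry no stimulus information) are toy assumptions for illustration, not the cat AAF data:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of x (column vector)
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of y (row vector)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])).sum())

rng = np.random.default_rng(0)
rates = rng.choice([2.0, 4.0, 8.0, 16.0], size=2000)   # click-train repetition rates (Hz)
isis = 1 / rates + rng.normal(0, 0.005, rates.size)    # ISI code: tracks the stimulus period
counts = rng.poisson(20, rates.size).astype(float)     # rate code: here independent of stimulus

mi_isi = mutual_information(rates, isis)               # near 2 bits (4 equiprobable rates)
mi_count = mutual_information(rates, counts)           # near 0 (plus small plug-in bias)
```

With four equiprobable rates the ceiling is 2 bits; an informative ISI code approaches it while an uninformative rate code stays near zero, mirroring the study's comparison.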

  18. Shaping prestimulus neural activity with auditory rhythmic stimulation improves the temporal allocation of attention

    Science.gov (United States)

    Ronconi, Luca; Pincham, Hannah L.; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes

    2016-01-01

    Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensorial events to improve the allocation of attention in time. PMID:26986506
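
The link between 10 Hz stimulation and alpha-band (8-12 Hz) activity can be illustrated by comparing the alpha share of the power spectrum with and without an entrained 10 Hz component. The synthetic "EEG" below (sample rate, amplitudes, noise) is an assumption for illustration, not the study's data or analysis:

```python
import numpy as np

fs = 250                                   # assumed EEG sample rate (Hz)
t = np.arange(0, 2, 1 / fs)                # one 2 s pre-RSVP epoch
rng = np.random.default_rng(0)

def alpha_share(eeg, fs):
    """Fraction of 1-40 Hz spectral power falling in the alpha band (8-12 Hz)."""
    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    broad = psd[(freqs >= 1) & (freqs <= 40)].sum()
    return alpha / broad

noise_only = rng.normal(0, 1, t.size)                        # no entrainment
entrained = noise_only + 1.5 * np.sin(2 * np.pi * 10 * t)    # 10 Hz component driven by the stream

share_noise = alpha_share(noise_only, fs)
share_entrained = alpha_share(entrained, fs)
```

A larger alpha share during auditory stimulation corresponds to the enhanced prestimulus alpha activity reported above.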

  20. Functional MRI observation of the auditory Chinese lexical processing mechanism in the left anterior temporal lobe

    Institute of Scientific and Technical Information of China (English)

    王晓怡; 卢洁; 李坤成; 张苗; 徐国庆; 舒华

    2011-01-01

    Objective: To explore the neural mechanism of auditory Chinese lexical processing in the left anterior temporal lobe (ATL) of healthy participants with functional magnetic resonance imaging (fMRI). Methods: Fifteen right-handed healthy participants (5 males, 10 females) were scanned on a 3.0T MRI system with a standard head coil while they either repeated auditorily presented words or judged whether the auditory items denoted something dangerous (a semantic judgment task). AFNI was used to process the fMRI data and to localize functional areas and task differences in the anterior temporal lobe. Results: Phonological processing of auditory Chinese lexical information was located in the anterior superior temporal gyrus, whereas semantic processing was located in the anterior middle and inferior temporal gyri; the phonological and semantic processing of auditory Chinese words were thus segregated within the ATL. Conclusion: The ATL supports semantic integration. Two pathways to semantic access are suggested: a direct pathway in the dorsal temporal lobe for the repetition task, and an indirect pathway in the ventral temporal lobe for the semantic judgment task.

  1. The time window of multisensory integration: relating reaction times and judgments of temporal order.

    Science.gov (United States)

    Diederich, Adele; Colonius, Hans

    2015-04-01

    Even though visual and auditory information of one and the same event often do not arrive at the sensory receptors at the same time, due to the different physical transmission times of the modalities, the brain maintains a unitary perception of the event, at least within a certain range of sensory arrival-time differences. The properties of this "temporal window of integration" (TWIN), and its recalibration due to task requirements, attention, and other variables, have recently been investigated intensively. Up to now, however, there has been no consistent definition of "temporal window" across the different paradigms used to measure its width. Here we propose such a definition based on our TWIN model (Colonius & Diederich, 2004). It applies to judgments of temporal order (or simultaneity) as well as to reaction time (RT) paradigms. Reanalyzing data from Mégevand, Molholm, Nayak, & Foxe (2013) by fitting the TWIN model to data from both paradigms, we confirmed the authors' hypothesis that the temporal window in an RT task tends to be wider than in a temporal-order judgment (TOJ) task. This first step toward a unified concept of TWIN should be a valuable tool in guiding investigations of the neural and cognitive bases of this so-far somewhat elusive concept. PMID:25706404
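
The TWIN model's first stage can be sketched by Monte Carlo: integration is possible only if the peripheral processing of both modalities terminates within one temporal window. The exponential latency distributions and their means below are illustrative assumptions, not parameters fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_integration(soa_ms, window_ms, mean_v=60.0, mean_a=40.0, n=100_000):
    """Monte Carlo estimate of the TWIN first-stage event: the visual and
    auditory peripheral processes terminate within one temporal window."""
    t_v = rng.exponential(mean_v, n)            # visual peripheral latency
    t_a = soa_ms + rng.exponential(mean_a, n)   # auditory latency, offset by the SOA
    return float(np.mean(np.abs(t_v - t_a) < window_ms))

p_sync = p_integration(soa_ms=0, window_ms=200)     # simultaneous presentation, wide window
p_async = p_integration(soa_ms=300, window_ms=200)  # large SOA: integration becomes rare
p_narrow = p_integration(soa_ms=0, window_ms=50)    # narrower window, fewer integrated trials
```

Under this scheme, a wider window (as inferred for the RT task) yields integration over a broader range of SOAs than a narrower one (as in the TOJ task).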

  2. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration

    Directory of Open Access Journals (Sweden)

    Tiziana Vercillo

    2015-05-01

    Full Text Available The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during the integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested subjects in an attentional and a non-attentional condition. In the attention experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations) arranged along the horizontal axis were presented sequentially. In the primary task, subjects had to evaluate the position of the second stimulus (the probe) with respect to the others (a space bisection task). In the secondary task, they had to occasionally report changes in the duration of the second auditory stimulus. In the non-attentional condition, participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition relative to the non-attentional control condition. Interestingly, in both conditions the multisensory results were well predicted by the MLE model. The results of this study support the idea that modality-specific attention modulates multisensory integration.
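
Optimal (MLE) cue combination as used in this study reduces to inverse-variance weighting of the unisensory estimates. A minimal sketch; the locations and noise levels are invented, and "attention to sound" is modeled simply as a smaller auditory sigma:

```python
def mle_fuse(mu_a, sigma_a, mu_t, sigma_t):
    """Reliability-weighted (maximum-likelihood) fusion of independent
    Gaussian auditory and tactile position estimates."""
    w_a, w_t = 1 / sigma_a**2, 1 / sigma_t**2
    mu = (w_a * mu_a + w_t * mu_t) / (w_a + w_t)     # fused location
    sigma = (w_a + w_t) ** -0.5                      # fused (smaller) uncertainty
    return mu, sigma

# Baseline: a noisy auditory estimate alongside a sharp tactile one
mu1, s1 = mle_fuse(0.0, 2.0, 10.0, 1.0)
# Attention sharpens the auditory estimate, increasing its weight:
# the fused percept moves toward the auditory location and its variance shrinks
mu2, s2 = mle_fuse(0.0, 1.0, 10.0, 1.0)
```

The fused variance is always below either unisensory variance, which is the signature of optimal integration the study tests against.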

  3. Spatial interactions determine temporal feature integration as revealed by unmasking

    Directory of Open Access Journals (Sweden)

    Michael H. Herzog

    2006-01-01

    Full Text Available Feature integration is one of the most fundamental problems in neuroscience. In a recent contribution, we showed that a trailing grating can diminish the masking effects one vernier exerts on another, preceding vernier. Here, we show that this temporal unmasking depends on neural spatial interactions related to the trailing grating. Hence, our paradigm allows us to study the spatio-temporal interactions underlying feature integration.

  4. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750, or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750, and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system and, as indexed by the ERP suppression, the sensory feedback contribution to their processing is reduced. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556

  5. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders

    OpenAIRE

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuro-imaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such int...

  6. Temporal resolution of the Florida manatee (Trichechus manatus latirostris) auditory system.

    Science.gov (United States)

    Mann, David A; Colbert, Debborah E; Gaspard, Joseph C; Casper, Brandon M; Cook, Mandy L H; Reep, Roger L; Bauer, Gordon B

    2005-10-01

    Auditory evoked potentials (AEPs) of two Florida manatees (Trichechus manatus latirostris) were measured in response to amplitude-modulated tones. The AEP measurements showed weak responses to test stimuli from 4 kHz to 40 kHz. The manatee modulation rate transfer function (MRTF) is maximally sensitive to 150 and 600 Hz amplitude modulation (AM) rates. The 600 Hz AM rate is midway between the AM sensitivities of terrestrial mammals (chinchillas, gerbils, and humans; 80-150 Hz) and dolphins (1,000-1,200 Hz). Audiograms estimated from the input-output functions of the AEPs greatly underestimate behavioral hearing thresholds measured in two other manatees. This underestimation is probably due to the electrodes being located several centimeters from the brain. PMID:16001184

  7. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. PMID:15549685

  8. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

    Yael Hazan

    2015-02-01

    Full Text Available Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras to barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely towards the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target.

  9. Age-group differences in speech identification despite matched audiometrically normal hearing: Contributions from auditory temporal processing and cognition

    Directory of Open Access Journals (Sweden)

    Christian Füllgrabe

    2015-01-01

    Full Text Available Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60-79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125-6 kHz were matched to nine young (YNH; 18-27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5-180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (improvement in speech identification obtained by amplitude-modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric

  10. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding (the inability to see an object when it is surrounded by flankers in the periphery) does not block semantic activation: words rendered unrecognizable by visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration, the simplest kind of temporal semantic integration, did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  11. Temporal structure and complexity affect audio-visual correspondence detection

    OpenAIRE

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task re...

  14. Auditory evoked potentials to spectro-temporal modulation of complex tones in normal subjects and patients with severe brain injury.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L; Stokes, M; Munday, R; Haque, N

    2000-05-01

    In order to assess higher auditory processing capabilities, long-latency auditory evoked potentials (AEPs) were recorded to synthesized musical instrument tones in 22 post-comatose patients with severe brain injury causing variably attenuated behavioural responsiveness. On the basis of normative studies, three different types of spectro-temporal modulation were employed. When a continuous 'clarinet' tone changes pitch once every few seconds, N1/P2 potentials are evoked at latencies of approximately 90 and 180 ms, respectively. Their distribution in the fronto-central region is consistent with generators in the supratemporal cortex of both hemispheres. When the pitch is modulated at a much faster rate (approximately 16 changes/s), responses to each change are virtually abolished but potentials with similar distribution are still elicited by changing the timbre (e.g. 'clarinet' to 'oboe') every few seconds. These responses appear to represent the cortical processes concerned with spectral pattern analysis and the grouping of frequency components to form sound 'objects'. Following a period of 16/s oscillation between two pitches, a more anteriorly distributed negativity is evoked on resumption of a steady pitch. Various lines of evidence suggest that this is probably equivalent to the 'mismatch negativity' (MMN), reflecting a pre-perceptual, memory-based process for detection of change in spectro-temporal sound patterns. This method requires no off-line subtraction of AEPs evoked by the onset of a tone, and the MMN is produced rapidly and robustly with considerably larger amplitude (usually >5 microV) than that to discontinuous pure tones. In the brain-injured patients, the presence of AEPs to two or more complex tone stimuli (in the combined assessment of two authors who were 'blind' to the clinical and behavioural data) was significantly associated with the demonstrable possession of discriminative hearing (the ability to respond differentially to verbal commands)

  15. Spatio-temporal data analytics for wind energy integration

    CERN Document Server

    Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief presents spatio-temporal data analytics for wind energy integration using stochastic modeling and optimization methods. It explores techniques for efficiently integrating renewable energy generation into bulk power grids. The operational challenges of wind and its variability are carefully examined. A spatio-temporal analysis approach enables the authors to develop Markov-chain-based short-term forecasts of wind farm power generation. To deal with wind ramp dynamics, a support vector machine enhanced Markov model is introduced. The stochastic optimization of economic dispatch
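    The Markov-chain forecasting idea can be sketched as follows. The equal-width state discretization, the Laplace smoothing, and all parameter values are illustrative assumptions for exposition, not the book's actual model:

```python
def fit_transition_matrix(series, n_states, p_max):
    """Estimate a first-order Markov transition matrix from a wind-power
    time series by discretizing power [0, p_max] into n_states equal-width
    bins and counting observed state-to-state transitions."""
    states = [min(int(p / p_max * n_states), n_states - 1) for p in series]
    counts = [[1.0] * n_states for _ in range(n_states)]  # Laplace smoothing
    for s, s_next in zip(states, states[1:]):
        counts[s][s_next] += 1
    # Normalize each row into a probability distribution.
    return [[c / sum(row) for c in row] for row in counts]

def forecast_next(P, state, n_states, p_max):
    """Expected power one step ahead, using bin-center values."""
    centers = [(i + 0.5) * p_max / n_states for i in range(n_states)]
    return sum(p * c for p, c in zip(P[state], centers))
```

    A multi-step forecast follows by repeatedly multiplying the state distribution by the transition matrix; the ramp-dynamics extension mentioned in the abstract would replace the plain counts with a classifier-assisted model.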

  16. Photonic temporal integrator for all-optical computing.

    Science.gov (United States)

    Slavík, Radan; Park, Yongwoo; Ayotte, Nicolas; Doucet, Serge; Ahn, Tae-Jung; LaRochelle, Sophie; Azaña, José

    2008-10-27

    We report the first experimental realization of an all-optical temporal integrator. The integrator is implemented using an all-fiber active (gain-assisted) filter based on superimposed fiber Bragg gratings made in an Er-Yb co-doped optical fiber that behaves like an 'optical capacitor'. Functionality of this device was tested by integrating different optical pulses, with time duration down to 60 ps, and by integration of two consecutive pulses that had different relative phases, separated by up to 1 ns. The potential of the developed device for implementing all-optical computing systems for solving ordinary differential equations was also experimentally tested. PMID:18958098
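    In the discrete-time domain, the ideal temporal integrator that the photonic device approximates is simply a running sum, y[n] = y[n-1] + x[n]·Δt. A minimal numerical sketch, assuming a uniformly sampled input (the sampling interval is an arbitrary illustration, not a property of the device):

```python
def integrate(signal, dt):
    """Discrete ideal temporal integrator: y[n] = y[n-1] + x[n] * dt,
    the operation the photonic 'optical capacitor' approximates."""
    out, acc = [], 0.0
    for x in signal:
        acc += x * dt
        out.append(acc)
    return out
```

    This also illustrates the phase test described in the abstract: two consecutive pulses of opposite sign (a pi relative phase) integrate to zero net output, while in-phase pulses accumulate.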

  17. Quantifying Auditory Temporal Stability in a Large Database of Recorded Music

    OpenAIRE

    Ellis, Robert J.; Zhiyan Duan; Ye Wang

    2014-01-01

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). A...

  18. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series that could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  19. Species specificity of temporal processing in the auditory midbrain of gray treefrogs: long-interval neurons.

    Science.gov (United States)

    Hanson, Jessica L; Rose, Gary J; Leary, Christopher J; Graham, Jalina A; Alluri, Rishi K; Vasquez-Opazo, Gustavo A

    2016-01-01

    In recently diverged gray treefrogs (Hyla chrysoscelis and H. versicolor), advertisement calls that differ primarily in pulse shape and pulse rate act as an important premating isolation mechanism. Temporally selective neurons in the anuran inferior colliculus may contribute to selective behavioral responses to these calls. Here we present in vivo extracellular and whole-cell recordings from long-interval-selective neurons (LINs) made during presentation of pulses that varied in shape and rate. Whole-cell recordings revealed that interplay between excitation and inhibition shapes long-interval selectivity. LINs in H. versicolor showed greater selectivity for slow-rise pulses, consistent with the slow-rise pulse characteristics of their calls. The steepness of pulse-rate tuning functions, but not the distributions of best pulse rates, differed between the species in a manner that depended on whether pulses had a slow- or fast-rise shape. When tested with stimuli representing the temporal structure of the advertisement calls of H. chrysoscelis or H. versicolor, approximately 27% of LINs in H. versicolor responded exclusively to the latter stimulus type. The LINs of H. chrysoscelis were less selective. Encounter calls, which are produced at similar pulse rates in both species (≈5 pulses/s), are likely to be effective stimuli for the LINs of both species. PMID:26614093

  20. Comparison of LFP-Based and Spike-Based Spectro-Temporal Receptive Fields and Cross-Correlation in Cat Primary Auditory Cortex

    OpenAIRE

    Eggermont, Jos J.; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that ...
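    Spike-based STRFs such as those compared here are commonly estimated by spike-triggered averaging: averaging the stretch of stimulus spectrogram that precedes each spike. A minimal sketch under that assumption (the data layout and all names are hypothetical, not the study's analysis pipeline):

```python
def strf_sta(spectrogram, spike_times, n_history):
    """Spike-triggered average estimate of a spectro-temporal receptive
    field: average the n_history spectrogram frames preceding each spike.

    spectrogram : list of frames, each a list of per-frequency-band energies
    spike_times : frame indices at which spikes occurred
    """
    n_bands = len(spectrogram[0])
    sta = [[0.0] * n_bands for _ in range(n_history)]
    used = 0
    for t in spike_times:
        if t < n_history:
            continue                  # not enough stimulus history before spike
        used += 1
        for k in range(n_history):
            frame = spectrogram[t - n_history + k]
            for b in range(n_bands):
                sta[k][b] += frame[b]
    if not used:
        raise ValueError("no spike had enough preceding stimulus history")
    return [[v / used for v in row] for row in sta]
```

    The same computation applied to an LFP-derived event train (rather than spikes) gives the LFP-based STRF that the study compares against.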

  2. Slow wave changes in amygdala to visual, auditory, and social stimuli following lesions of the inferior temporal cortex in squirrel monkey (Saimiri sciureus).

    Science.gov (United States)

    Kling, A S; Lloyd, R L; Perryman, K M

    1987-01-01

    Radiotelemetry of slow wave activity of the amygdala was recorded under a variety of conditions. Power, and the percentage of power in the delta band, increased in response to stimulation. Recordings of monkey vocalizations and slides of ethologically relevant, natural objects produced a greater increase in power than did control stimuli. The responses to auditory stimuli increased when these stimuli were presented in an unrestrained, group setting, yet the responses to the vocalizations remained greater than those following control stimuli. Both the natural auditory and visual stimuli produced a reliable hierarchy with regard to the magnitude of response. Following lesions of inferior temporal cortex, these two hierarchies are disrupted, especially in the auditory domain. Further, these same stimuli, when presented after the lesion, produced a decrease, rather than an increase, in power. Nevertheless, the power recorded from the natural stimuli was still greater than that recorded from control stimuli in that the former produced less of a decrease in power, following the lesion, than did the latter. These data, in conjunction with a parallel report on evoked potentials in the amygdala, before and after cortical lesions, lead us to conclude that sensory information, particularly auditory, available to the amygdala, following the lesion, is substantially the same, and that it is the interpretation of this information, by the amygdala, which is altered by the cortical lesion. PMID:3566692

  3. Conditioning the cochlea to facilitate survival and integration of exogenous cells into the auditory epithelium.

    Science.gov (United States)

    Park, Yong-Ho; Wilson, Kevin F; Ueda, Yoshihisa; Tung Wong, Hiu; Beyer, Lisa A; Swiderski, Donald L; Dolan, David F; Raphael, Yehoash

    2014-04-01

    The mammalian auditory epithelium (AE) cannot replace supporting cells and hair cells once they are lost. Therefore, sensorineural hearing loss associated with missing cells is permanent. This inability to regenerate critical cell types makes the AE a potential target for cell replacement therapies such as stem cell transplantation. Inserting stem cells into the AE of deaf ears is a complicated task due to the hostile, high potassium environment of the scala media in the cochlea, and the robust junctional complexes between cells in the AE that resist stem cell integration. Here, we evaluate whether temporarily reducing potassium levels in the scala media and disrupting the junctions in the AE make the cochlear environment more receptive and facilitate survival and integration of transplanted cells. We used sodium caprate to transiently disrupt the AE junctions, replaced endolymph with perilymph, and blocked stria vascularis pumps with furosemide. We determined that these three steps facilitated survival of HeLa cells in the scala media for at least 7 days and that some of the implanted cells formed a junctional contact with native AE cells. The data suggest that manipulation of the cochlear environment facilitates survival and integration of exogenously transplanted HeLa cells in the scala media. PMID:24394296

  4. Highly resolved spatial and temporal photoemission analysis of integrated circuits

    International Nuclear Information System (INIS)

    We develop an optical system for highly resolved photoemission analysis of integrated circuits. Photons emitted by switching transistors allow the operation of an integrated circuit to be observed by recording the individual photoemission acts. The ongoing feature size reduction makes the space–time-resolved detection of these extremely weak photoemissions challenging. We combine different optical and photonic solutions to achieve both a high spatial and temporal resolution in a compact analysis system. Imaging and detection modules capture photons through the substrate during normal chip operation and perform highly resolved optical analysis. We demonstrate the system capability by photoemission records of a real-world IC device. (paper)

  5. Medial temporal lobe roles in human path integration.

    Directory of Open Access Journals (Sweden)

    Naohide Yamamoto

    Full Text Available Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those with lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures in human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.
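    Computationally, path integration amounts to accumulating self-motion samples into a homing vector back to the start. A minimal dead-reckoning sketch (the heading convention, with 0° pointing along the x-axis, and the units are illustrative; this is not the authors' task model):

```python
import math

def path_integrate(steps):
    """Dead-reckoning path integration: accumulate (heading_deg, distance)
    self-motion samples and return the homing vector back to the origin
    as (distance, heading_deg). Headings are measured counterclockwise
    from the positive x-axis."""
    x = y = 0.0
    for heading_deg, dist in steps:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    home_dist = math.hypot(x, y)
    home_heading = math.degrees(math.atan2(-y, -x)) % 360  # direction back to start
    return home_dist, home_heading
```

    The overshoot the patients showed when walking to a previewed target corresponds, in this framing, to a miscalibrated gain on the accumulated distance rather than an error in the vector summation itself.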

  6. Idealized computational models for auditory receptive fields.

    Directory of Open Access Journals (Sweden)

    Tony Lindeberg

    Full Text Available We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
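    The Gammatone filters that the framework re-derives have the classical impulse response g(t) = t^(n-1) e^(-2*pi*b*t) cos(2*pi*fc*t). A minimal sampled sketch, assuming the common ERB-based bandwidth rule (Glasberg & Moore) rather than the paper's axiomatic derivation; the default order and duration are conventional choices:

```python
import math

def gammatone_ir(fc, fs, order=4, duration=0.025):
    """Sampled impulse response of a Gammatone filter:
    g(t) = t**(order-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t),
    with bandwidth b tied to center frequency fc via the ERB scale."""
    erb = 24.7 * (4.37 * fc / 1000 + 1)  # equivalent rectangular bandwidth (Hz)
    b = 1.019 * erb
    n = int(duration * fs)
    return [
        (k / fs) ** (order - 1)
        * math.exp(-2 * math.pi * b * (k / fs))
        * math.cos(2 * math.pi * fc * (k / fs))
        for k in range(n)
    ]
```

    A bank of such filters with ERB-spaced center frequencies, each followed by envelope extraction, gives the basic auditory (cochleagram-style) features the abstract refers to.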

  7. Assessment and Preservation of Auditory Nerve Integrity in the Deafened Guinea Pig

    OpenAIRE

    Ramekers, D.

    2014-01-01

    Profound hearing loss is often caused by cochlear hair cell loss. Cochlear implants (CIs) essentially replace hair cells by encoding sound and conveying the signal by means of pulsatile electrical stimulation to the spiral ganglion cells (SGCs) which form the auditory nerve. SGCs progressively degenerate following hair cell loss, as a result of lost neurotrophic signaling from the hair cells. Degeneration of the auditory nerve may compromise the ability to hear with a CI. Therefore, the first...

  8. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  9. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  10. Laevo: A Temporal Desktop Interface for Integrated Knowledge Work

    DEFF Research Database (Denmark)

    Jeuris, Steven; Houben, Steven; Bardram, Jakob

    2014-01-01

    Prior studies show that knowledge work is characterized by highly interlinked practices, including task, file and window management. However, existing personal information management tools primarily focus on a limited subset of knowledge work, forcing users to perform additional manual configuration work to integrate the different tools they use. In order to understand tool usage, we review literature on how users' activities are created and evolve over time as part of knowledge worker practices. From this we derive the activity life cycle, a conceptual framework describing the different states and transitions of an activity. The life cycle is used to inform the design of Laevo, a temporal activity-centric desktop interface for personal knowledge work. Laevo allows users to structure work within dedicated workspaces, managed on a timeline. Through a centralized notification system which...

  11. The effect of spectrally and temporally altered auditory feedback on speech intonation by hard of hearing listeners

    Science.gov (United States)

    Barac-Cikoja, Dragana; Tamaki, Chizuko; Thomas, Lannie

    2003-04-01

    Eight listeners with severe to profound hearing loss read a six-sentence passage under spectrally altered and/or delayed auditory feedback. Spectral manipulation was implemented by filtering the speech signal into either one or four frequency bands, extracting respective amplitude envelope(s), and amplitude-modulating the corresponding noise band(s). Thus, the resulting auditory feedback did not preserve intonation information, although the four-band noise signal remained intelligible. The two noise conditions and the unaltered speech were each tested under the simultaneous and three delayed (50 ms, 100 ms, 200 ms) feedback conditions. Auditory feedback was presented via insert earphones at the listener's most comfortable level. Recorded speech was analyzed for the form and domain of the fundamental frequency (f0) declination, the magnitude of the sentence initial f0 peak (P1), and the fall-rise pattern of f0 at the phrasal boundaries. A significant interaction between the two feedback manipulations was found. Intonation characteristics were affected by speech delay only under the spectrally unaltered feedback: The magnitude of P1 and the slope of the f0 topline both increased with the delay. The spectral smearing diminished the fall-rise pattern within a sentence. Individual differences in the magnitude of these effects were significant.

  12. Auditory Hallucinations in Schizophrenia and Nonschizophrenia Populations : A Review and Integrated Model of Cognitive Mechanisms

    NARCIS (Netherlands)

    Waters, Flavie; Allen, Paul; Aleman, Andre; Fernyhough, Charles; Woodward, Todd S.; Badcock, Johanna C.; Barkus, Emma; Johns, Louise; Varese, Filippo; Menon, Mahesh; Vercammen, Ans; Laroi, Frank

    2012-01-01

    While the majority of cognitive studies on auditory hallucinations (AHs) have been conducted in schizophrenia (SZ), an increasing number of researchers are turning their attention to different clinical and nonclinical populations, often using SZ findings as a model for research. Recent advances deri

  13. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth Hendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations influence temporal-order judgment, and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does influence the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  14. Gone in a Flash: Manipulation of Audiovisual Temporal Integration Using Transcranial Magnetic Stimulation

    Directory of Open Access Journals (Sweden)

    Roy eHamilton

    2013-09-01

    Full Text Available While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke, Vieth, Cottrell, and Mattingley (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events, and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy.

  15. Long-term music training tunes how the brain temporally binds signals from multiple senses

    OpenAIRE

    Lee, Hweeling; Noppeney, Uta

    2011-01-01

    Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics–fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisua...

  16. Auditory Processing Disorders

    Science.gov (United States)

    Auditory Processing Disorders Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...

  17. Conditioning the Cochlea to Facilitate Survival and Integration of Exogenous Cells into the Auditory Epithelium

    OpenAIRE

    Park, Yong-Ho; Wilson, Kevin F.; Ueda, Yoshihisa; Tung Wong, Hiu; Lisa A. Beyer; Donald L. Swiderski; Dolan, David F.; Raphael, Yehoash

    2014-01-01

    The mammalian auditory epithelium (AE) cannot replace supporting cells and hair cells once they are lost. Therefore, sensorineural hearing loss associated with missing cells is permanent. This inability to regenerate critical cell types makes the AE a potential target for cell replacement therapies such as stem cell transplantation. Inserting stem cells into the AE of deaf ears is a complicated task due to the hostile, high potassium environment of the scala media in the cochlea, and the robu...

  18. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  19. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  20. Auditory Display

    DEFF Research Database (Denmark)

    volume. The conference's topics include auditory exploration of data via sonification and audification; real time monitoring of multivariate date; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...

  1. Extended temporal integration in rapid serial visual presentation: Attentional control at Lag 1 and beyond.

    Science.gov (United States)

    Akyürek, Elkan G; Wolff, Michael J

    2016-07-01

    In the perception of target stimuli in rapid serial visual presentations, the process of temporal integration plays an important role when two targets are presented in direct succession (at Lag 1), causing them to be perceived as a singular episodic event. This has been associated with increased reversals of target order report and elevated task performance in classic paradigms. Yet, most current models of temporal attention do not incorporate a mechanism of temporal integration, and it is currently an open question whether temporal integration is a factor in attentional processing: It might be an independent process, perhaps little more than a sensory sampling rate parameter, isolated to Lag 1, where it leaves the attentional dynamics otherwise unaffected. In the present study, these boundary conditions were tested. Temporal target integration was observed across sequences of three targets spanning an interval of 240 ms. Integration rates furthermore depended strongly on bottom-up attentional filtering, and to a lesser degree on top-down control. The results support the idea that temporal integration is an adaptive process that is part of, or at least interacts with, the attentional system. Implications for current models of temporal attention are discussed. PMID:27155801

  2. Neural interactions in unilateral colliculus and between bilateral colliculi modulate auditory signal processing

    Science.gov (United States)

    Mei, Hui-Xian; Cheng, Liang; Chen, Qi-Cai

    2013-01-01

    In the auditory pathway, the inferior colliculus (IC) is a major center for temporal and spectral integration of auditory information. There are widespread neural interactions within the unilateral (one) IC and between the bilateral (two) ICs that can modulate auditory signal processing, such as the amplitude and frequency selectivity of IC neurons. These interactions are either inhibitory or excitatory, mediated mostly by γ-aminobutyric acid (GABA) and glutamate, respectively, with inhibitory interactions in the majority. This imbalance between excitatory and inhibitory projections plays an important role in the formation of unilateral auditory dominance and sound localization, and the interactions within one IC and between the two ICs provide an adjustable, plastic modulation pattern for auditory signal processing. PMID:23626523

  3. Temporal integration of loudness measured using categorical loudness scaling and matching procedures

    OpenAIRE

    Valente, Daniel L.; Joshi, Suyash N.; Jesteadt, Walt

    2011-01-01

    Temporal integration of loudness of 1 kHz tones with 5 and 200 ms durations was assessed in four subjects using two loudness measurement procedures: categorical loudness scaling (CLS) and loudness matching. CLS provides a reliable and efficient procedure for collecting data on the temporal integration of loudness, and the previously reported nonmonotonic behavior observed at mid sound pressure levels is replicated with this procedure. Stimuli that are assigned to the same category are effect...

  4. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Directory of Open Access Journals (Sweden)

    Jos J Eggermont

    Full Text Available Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex.

  5. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Science.gov (United States)

    Eggermont, Jos J; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex. PMID:21625385
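
    Spike-based STRFs of the kind compared in this record are commonly estimated as the spike-triggered average of the stimulus spectrogram. A toy sketch of that standard estimator on synthetic data (the paper's own estimation pipeline may differ; all names here are illustrative):

    ```python
    import numpy as np

    def strf_sta(spectrogram, spike_bins, n_history=10):
        """Spike-triggered average: mean spectrogram patch preceding each spike.
        spectrogram: (n_freq, n_time) array; spike_bins: time-bin indices of spikes."""
        patches = [spectrogram[:, t - n_history:t]
                   for t in spike_bins if t >= n_history]
        return np.mean(patches, axis=0)  # shape (n_freq, n_history)

    rng = np.random.default_rng(0)
    spec = rng.normal(size=(16, 1000))
    spec[8, :] += 1.0                    # extra energy in one frequency band
    # Simulated neuron: spikes tend to follow energy in band 8 at one-bin lag.
    spikes = [t for t in range(10, 1000) if spec[8, t - 1] > 1.5]
    strf = strf_sta(spec, spikes)        # band 8 at the shortest lag stands out
    ```

    The averaged patch recovers the stimulus feature that drove the spikes; in real recordings the same computation is applied per electrode to spike trains and the presented spectro-temporal stimulus.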

  6. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Martinius Hauf

    2013-06-01

    Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  7. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    Directory of Open Access Journals (Sweden)

    Jordi Navarra

    Full Text Available The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  8. A hierarchical nest survival model integrating incomplete temporally varying covariates

    Science.gov (United States)

    Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
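
    The daily-survival structure described here can be illustrated with a logistic link from the insect covariate to daily survival, with overall nest success the product of daily survival probabilities over the exposure period. A simplified sketch with made-up coefficients, not the paper's fitted model:

    ```python
    import math

    def daily_survival(insect_count, beta0=4.0, beta1=-0.05):
        """Logistic daily nest survival; beta1 < 0 encodes the hypothesis that
        higher biting-insect abundance lowers daily survival."""
        return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * insect_count)))

    def nest_success(daily_insect_counts):
        """Probability the nest survives the whole exposure period:
        the product of day-by-day survival probabilities."""
        p = 1.0
        for count in daily_insect_counts:
            p *= daily_survival(count)
        return p

    low_flies  = nest_success([5] * 30)   # 30-day nesting period, few black flies
    high_flies = nest_success([80] * 30)  # sustained high fly abundance
    ```

    In the hierarchical model, the daily covariate values that were not observed (days without insect trapping) are themselves estimated by a parallel submodel rather than supplied directly as above.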

  9. Event based self-supervised temporal integration for multimodal sensor data.

    Science.gov (United States)

    Barakova, Emilia I; Lourens, Tino

    2005-06-01

    A method for synergistic integration of multimodal sensor data is proposed in this paper. This method is based on two aspects of the integration process: (1) achieving synergistic integration of two or more sensory modalities, and (2) fusing the various information streams at particular moments during processing. Inspired by psychophysical experiments, we propose a self-supervised learning method for achieving synergy with combined representations. Evidence from temporal registration and binding experiments indicates that different cues are processed individually at specific time intervals. Therefore, an event-based temporal co-occurrence principle is proposed for the integration process. This integration method was applied to a mobile robot exploring unfamiliar environments. Simulations showed that integration enhanced route recognition with many perceptual similarities; moreover, they indicate that a perceptual hierarchy of knowledge about instant movement contributes significantly to short-term navigation, but that visual perceptions have bigger impact over longer intervals. PMID:15988800
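
    The event-based temporal co-occurrence principle can be sketched as counting how often events from two sensory streams fall within a shared time window, with those co-occurrence statistics used to associate the streams. A toy illustration only; the paper's self-supervised learning rule is more elaborate:

    ```python
    def cooccurrence_count(events_a, events_b, window=0.1):
        """Count pairs of events (timestamps in seconds) from two modalities
        that fall within `window` seconds of each other."""
        return sum(1 for ta in events_a for tb in events_b
                   if abs(ta - tb) <= window)

    # Hypothetical visual and movement event streams from a robot run.
    visual   = [0.50, 1.20, 2.05, 3.40]
    movement = [0.52, 1.18, 2.90, 3.45]
    aligned  = cooccurrence_count(visual, movement)       # 3 pairs within 100 ms
    shuffled = cooccurrence_count(visual, [10.0, 11.0])   # no temporal overlap
    ```

    High co-occurrence counts mark moments when the modalities plausibly report the same event, which is when fusing their representations is most informative.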

  10. MR and genetics in schizophrenia: Focus on auditory hallucinations

    International Nuclear Information System (INIS)

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented

  11. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  12. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  13. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.

  14. Knowledge-data integration for temporal reasoning in a clinical trial system.

    Science.gov (United States)

    O'Connor, Martin J; Shankar, Ravi D; Parrish, David B; Das, Amar K

    2009-04-01

    Managing time-stamped data is essential to clinical research activities and often requires the use of considerable domain knowledge. Adequately representing and integrating temporal data and domain knowledge is difficult with the database technologies used in most clinical research systems. There is often a disconnect between the database representation of research data and corresponding domain knowledge of clinical research concepts. In this paper, we present a set of methodologies for undertaking ontology-based specification of temporal information, and discuss their application to the verification of protocol-specific temporal constraints among clinical trial activities. Our approach allows knowledge-level temporal constraints to be evaluated against operational trial data stored in relational databases. We show how the Semantic Web ontology and rule languages OWL and SWRL, respectively, can support tools for research data management that automatically integrate low-level representations of relational data with high-level domain concepts used in study design. PMID:18789876
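
    Independently of the OWL/SWRL machinery the paper uses, the kind of protocol-specific temporal constraint being verified (e.g., "a follow-up visit must occur within 30 days of treatment") can be illustrated directly over time-stamped records. A plain-Python sketch with invented activity names and dates:

    ```python
    from datetime import date

    def within_days(earlier, later, max_days):
        """True if `later` occurs on or after `earlier` and within `max_days` days."""
        delta = (later - earlier).days
        return 0 <= delta <= max_days

    # Hypothetical trial activities (field names and dates are illustrative).
    activities = {
        "treatment": date(2009, 1, 5),
        "follow_up": date(2009, 1, 28),
    }
    constraint_ok = within_days(activities["treatment"],
                                activities["follow_up"], max_days=30)
    ```

    The paper's contribution is to express such constraints at the knowledge level (OWL/SWRL) and evaluate them automatically against relational trial data, rather than hand-coding each check as above.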

  15. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  16. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Science.gov (United States)

    Mossbridge, Julia A; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  17. Temporal Profiles of Response Enhancement in Multisensory Integration

    OpenAIRE

    Rowland, Benjamin A.; Stein, Barry E.

    2008-01-01

    Animals have evolved multiple senses that transduce different forms of energy as a way of increasing their sensitivity to environmental events. Each sense provides a unique and independent perspective on the world, and very often a single event stimulates several of them. In order to make best use of the available information, the brain has also evolved the capacity to integrate information across the senses (“multisensory integration”). This facilitates the detection, localization, and ident...

  18. Temporal profiles of response enhancement in multisensory integration

    OpenAIRE

    Stein, Barry E.

    2008-01-01

    Animals have evolved multiple senses that transduce different forms of energy as a way of increasing their sensitivity to environmental events. Each sense provides a unique and independent perspective on the world, and very often a single event stimulates several of them. In order to make best use of the available information, the brain has also evolved the capacity to integrate information across the senses (“multisensory integration”). This facilitates the detection, localizatio...

  19. Temporal profiles of response enhancement in multisensory integration

    Directory of Open Access Journals (Sweden)

    Barry E Stein

    2008-12-01

    Full Text Available Animals have evolved multiple senses that transduce different forms of energy as a way of increasing their sensitivity to environmental events. Each sense provides a unique and independent perspective on the world, and very often a single event stimulates several of them. In order to make best use of the available information, the brain has also evolved the capacity to integrate information across the senses (“multisensory integration”). This facilitates the detection, localization, and identification of a given event, and has obvious survival value for the individual and the species. Multisensory responses in the superior colliculus (SC) evidence shorter latencies and are more robust at their onset. This is the phenomenon of initial response enhancement in multisensory integration, which is believed to reflect a real-time fusion of information across the senses. The present paper reviews two recent reports describing how the timing and robustness of sensory responses change as a consequence of multisensory integration in the model system of the SC.

  20. Temporal integration of multisensory stimuli in autism spectrum disorder: a predictive coding perspective.

    Science.gov (United States)

    Chan, Jason S; Langer, Anne; Kaiser, Jochen

    2016-08-01

    Recently, a growing number of studies have examined the role of multisensory temporal integration in people with autism spectrum disorder (ASD). Some studies have used temporal order judgments or simultaneity judgments to examine the temporal binding window, while others have employed multisensory illusions, such as the sound-induced flash illusion (SiFi). The SiFi is an illusion created by presenting two beeps along with one flash. Participants perceive two flashes if the stimulus-onset asynchrony (SOA) between the two beeps is brief. The temporal binding window can be measured by modulating the SOA between the beeps. Each of these tasks has been used to compare the temporal binding window in people with ASD and typically developing individuals; however, the results have been mixed. While temporal order and simultaneity judgment tasks have shown little difference in the temporal binding window between groups, studies using the SiFi have found a wider temporal binding window in ASD compared to controls. In this paper, we discuss these seemingly contradictory findings and suggest that predictive coding may be able to explain the differences between these tasks. PMID:27324803
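    A minimal sketch of how a temporal binding window can be quantified from simultaneity-judgment data. The SOAs and response proportions below are invented for illustration; published studies typically fit a psychometric function rather than interpolating directly:

```python
import numpy as np

# Invented example data: proportion of "simultaneous" responses at each
# audiovisual stimulus-onset asynchrony (SOA, ms; negative = sound first).
soas = np.array([-300.0, -200.0, -100.0, 0.0, 100.0, 200.0, 300.0])
p_sim = np.array([0.05, 0.20, 0.70, 0.95, 0.75, 0.30, 0.10])

def binding_window(soas, p, criterion=0.5):
    """Width (ms) of the SOA range where P(simultaneous) exceeds the
    criterion, using linear interpolation on each flank."""
    above = p >= criterion
    i = np.argmax(above)                       # first point above criterion
    j = len(p) - 1 - np.argmax(above[::-1])    # last point above criterion
    # np.interp requires ascending x, so order each flank accordingly
    left = np.interp(criterion, [p[i - 1], p[i]], [soas[i - 1], soas[i]])
    right = np.interp(criterion, [p[j + 1], p[j]], [soas[j + 1], soas[j]])
    return right - left

print(round(binding_window(soas, p_sim)))  # → 296 (ms, for this toy data)
```

    A wider window, as reported for ASD with the SiFi, would show up here as a larger criterion-crossing range.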

  1. Neurophysiological indices of atypical auditory processing and multisensory integration are associated with symptom severity in autism

    OpenAIRE

    Brandwein, A.B.; Foxe, J. J.; Butler, J. S.; Frey, H.P.; Bates, J.C.; Shulman, L.; Molholm, S.

    2015-01-01

    Atypical processing and integration of sensory inputs are hypothesized to play a role in unusual sensory reactions and social-cognitive deficits in autism spectrum disorder (ASD). Reports on the relationship between objective metrics of sensory processing and clinical symptoms, however, are surprisingly sparse. Here we examined the relationship between neurophysiological assays of sensory processing and 1) autism severity and 2) sensory sensitivities, in individuals with ASD aged 6–17. Multip...

  2. Multimodal integration of micro-Doppler sonar and auditory signals for behavior classification with convolutional networks.

    Science.gov (United States)

    Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas

    2013-10-01

    The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance. PMID:23924412
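    The probabilistic combination the authors describe, accumulating per-modality classifier outputs over time and then across modalities, can be sketched as a late-fusion rule over log posteriors (a generic illustration, not the paper's exact scheme):

```python
import numpy as np

def fuse(audio_logp, sonar_logp):
    """Combine per-frame log class posteriors over time (sum within each
    modality) and across modalities (sum again), then pick the best class.
    Summing logs corresponds to a product of independent posteriors."""
    combined = audio_logp.sum(axis=0) + sonar_logp.sum(axis=0)
    return int(np.argmax(combined))

# Toy example: 3 time frames, 2 behavior classes, per-modality posteriors.
audio_logp = np.log([[0.60, 0.40], [0.70, 0.30], [0.40, 0.60]])
sonar_logp = np.log([[0.55, 0.45], [0.80, 0.20], [0.50, 0.50]])
print(fuse(audio_logp, sonar_logp))  # → 0 (class 0 wins after fusion)
```

    The reported gain (94% and 91% unimodal vs. 98% multimodal) reflects exactly this effect: modalities that individually err on different trials reinforce the correct class when their evidence is pooled.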

  3. Individual differences in auditory abilities.

    Science.gov (United States)

    Kidd, Gary R; Watson, Charles S; Gygi, Brian

    2007-07-01

    Performance on 19 auditory discrimination and identification tasks was measured for 340 listeners with normal hearing. Test stimuli included single tones, sequences of tones, amplitude-modulated and rippled noise, temporal gaps, speech, and environmental sounds. Principal components analysis and structural equation modeling of the data support the existence of a general auditory ability and four specific auditory abilities. The specific abilities are (1) loudness and duration (overall energy) discrimination; (2) sensitivity to temporal envelope variation; (3) identification of highly familiar sounds (speech and nonspeech); and (4) discrimination of unfamiliar simple and complex spectral and temporal patterns. Examination of Scholastic Aptitude Test (SAT) scores for a large subset of the population revealed little or no association between general or specific auditory abilities and general intellectual ability. The findings provide a basis for research to further specify the nature of the auditory abilities. Of particular interest are results suggestive of a familiar sound recognition (FSR) ability, apparently specialized for sound recognition on the basis of limited or distorted information. This FSR ability is independent of normal variation in both spectral-temporal acuity and general intellectual ability. PMID:17614500

  4. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. PMID:27003546

  5. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf;

    2016-01-01

    auditory modeling. Inspired by the work of Edwards (2002), we studied the effects of DRC on a set of relatively basic outcome measures, such as forward masking functions (Glasberg and Moore, 1987) and spectral masking patterns (Moore et al., 1998), obtained at several masker levels and frequencies. Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.

  6. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    Science.gov (United States)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to

  7. Auditory Neuropathy

    Science.gov (United States)

    ... field differ in their opinions about the potential benefits of hearing aids, cochlear implants, and other technologies for people with auditory neuropathy. Some professionals report that hearing aids and personal listening devices such as frequency modulation (FM) systems are ...

  8. Facilitating Integrated Spatio-Temporal Visualization and Analysis of Heterogeneous Archaeological and Palaeoenvironmental Research Data

    Science.gov (United States)

    Willmes, C.; Brocks, S.; Hoffmeister, D.; Hütt, C.; Kürner, D.; Volland, K.; Bareth, G.

    2012-07-01

    In the context of the Collaborative Research Centre 806 "Our way to Europe" (CRC806), a research database is developed for integrating data from the disciplines of archaeology, the geosciences and the cultural sciences to facilitate integrated access to heterogeneous data sources. A practice-oriented data integration concept and its implementation is presented in this contribution. The data integration approach is based on the application of Semantic Web Technology and is applied to the domains of archaeological and palaeoenvironmental data. The aim is to provide integrated spatio-temporal access to an existing wealth of data to facilitate research on the integrated data basis. For the web portal of the CRC806 research database (CRC806-Database), a number of interfaces and applications have been evaluated, developed and implemented for exposing the data to interactive analysis and visualizations.

  9. Parameters Affecting Temporal Resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    CERN Document Server

    Mor, I; Dangendorf, V; Bar, D; Feldman, G; Goldberg, M B; Tittelmeier, K; Bromberger, B; Brandis, M; Weierganz, M

    2013-01-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy-variations of the total scattering cross-section in the En = 1-10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time of flight (ECTOF), it integrates the detector signal during a well-defined neutron Time of Flight window corresponding to a pre-selected energy bin, e.g., the energy-interval spanning a cross-section resonance of an element such as C, O and N. The integrative characteristic of the detector permits loss-free operation at very intense, pulsed neutron fluxes, at a cost however, of recorded temporal resolution degradation. This work presents a theoretical and experimental evaluation of detector related parameters which affect temporal resolution of the TRION system.

  10. Parameters affecting temporal resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    Science.gov (United States)

    Mor, I.; Vartsky, D.; Dangendorf, V.; Bar, D.; Feldman, G.; Goldberg, M. B.; Tittelmeier, K.; Bromberger, B.; Brandis, M.; Weierganz, M.

    2013-11-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy-variations of the total scattering cross-section in the En = 1-10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time of flight (ECTOF), it integrates the detector signal during a well-defined neutron Time of Flight window corresponding to a pre-selected energy bin, e.g., the energy-interval spanning a cross-section resonance of an element such as C, O and N. The integrative characteristic of the detector permits loss-free operation at very intense, pulsed neutron fluxes, at a cost, however, of recorded temporal resolution degradation. This work presents a theoretical and experimental evaluation of detector related parameters which affect temporal resolution of the TRION system.

  11. Temporal integration of loudness, loudness discrimination, and the form of the loudness function

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1997-01-01

    Temporal integration for loudness of 5-kHz tones was measured as a function of level between 2 and 60 dB SL. Absolute thresholds and levels required to produce equal loudness were measured for 2-, 10-, 50- and 250-ms tones using adaptive, two-interval, two-alternative forced-choice procedures. The procedure for loudness balances is new and obtained concurrent measurements for ten tone pairs in ten interleaved tracks. Each track converged at the level required to make the variable stimulus just louder than the fixed stimulus. Thus, the data yield estimates of the just-noticeable difference for loudness level and temporal integration for loudness. Results for four listeners show that the amount of temporal integration, defined as the level difference between equally loud short and long tones, varies markedly with level and is largest at moderate levels. The effect of level increases as the...
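    A rough sketch of one such adaptive two-interval forced-choice track, here as a simple 1-up/1-down rule with a simulated listener. The starting level, step size, and internal-noise parameter are assumptions for illustration, not the study's values:

```python
import random

# 1-up/1-down track: the variable tone is lowered after a "louder"
# response and raised after a "softer" response, so the track converges
# on the level at which the variable tone is judged louder on half the
# trials (the just-louder boundary tracked in the study).
def run_track(equal_level=60.0, start=70.0, step=2.0, trials=80,
              sigma=1.0, seed=1):
    random.seed(seed)
    level, reversals, last_dir = start, [], None
    for _ in range(trials):
        # simulated listener with Gaussian internal noise (sigma, in dB)
        louder = level > equal_level + random.gauss(0.0, sigma)
        direction = -1 if louder else +1
        if last_dir is not None and direction != last_dir:
            reversals.append(level)
        last_dir = direction
        level += direction * step
    return sum(reversals[-6:]) / 6.0   # mean of the last six reversals

estimate = run_track()
print(abs(estimate - 60.0) < 5.0)  # the track settles near the tracked level
```

    Interleaving ten such tracks, as in the study, simply means randomly choosing which track supplies the next trial so listeners cannot anticipate the level sequence.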

  12. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  13. Auditory Stream Biasing in Children with Reading Impairments

    Science.gov (United States)

    Ouimet, Tialee; Balaban, Evan

    2010-01-01

    Reading impairments have previously been associated with auditory processing differences. We examined "auditory stream biasing", a global aspect of auditory temporal processing. Children with reading impairments, control children and adults heard a 10 s long stream-bias-inducing sound sequence (a repeating 1000 Hz tone) and a test sequence (eight…

  14. Non-monotonic Temporal-Weighting Indicates a Dynamically Modulated Evidence-Integration Mechanism.

    Directory of Open Access Journals (Sweden)

    Zohar Z Bronfman

    2016-02-01

    Full Text Available Perceptual decisions are thought to be mediated by a mechanism of sequential sampling and integration of noisy evidence whose temporal weighting profile affects the decision quality. To examine temporal weighting, participants were presented with two brightness-fluctuating disks for 1, 2 or 3 seconds and were requested to choose the overall brighter disk at the end of each trial. By employing a signal-perturbation method, which deploys across trials a set of systematically controlled temporal dispersions of the same overall signal, we were able to quantify the participants' temporal weighting profile. Results indicate that, for intervals of 1 or 2 sec, participants exhibit a primacy-bias. However, for longer stimuli (3-sec) the temporal weighting profile is non-monotonic, with concurrent primacy and recency, which is inconsistent with the predictions of previously suggested computational models of perceptual decision-making (drift-diffusion and Ornstein-Uhlenbeck processes). We propose a novel, dynamic variant of the leaky-competing accumulator model as a potential account for this finding, and we discuss potential neural mechanisms.

  15. Non-monotonic Temporal-Weighting Indicates a Dynamically Modulated Evidence-Integration Mechanism.

    Science.gov (United States)

    Bronfman, Zohar Z; Brezis, Noam; Usher, Marius

    2016-02-01

    Perceptual decisions are thought to be mediated by a mechanism of sequential sampling and integration of noisy evidence whose temporal weighting profile affects the decision quality. To examine temporal weighting, participants were presented with two brightness-fluctuating disks for 1, 2 or 3 seconds and were requested to choose the overall brighter disk at the end of each trial. By employing a signal-perturbation method, which deploys across trials a set of systematically controlled temporal dispersions of the same overall signal, we were able to quantify the participants' temporal weighting profile. Results indicate that, for intervals of 1 or 2 sec, participants exhibit a primacy-bias. However, for longer stimuli (3-sec) the temporal weighting profile is non-monotonic, with concurrent primacy and recency, which is inconsistent with the predictions of previously suggested computational models of perceptual decision-making (drift-diffusion and Ornstein-Uhlenbeck processes). We propose a novel, dynamic variant of the leaky-competing accumulator model as a potential account for this finding, and we discuss potential neural mechanisms. PMID:26866598
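    The leaky-competing accumulator dynamics referenced above can be sketched in a few lines. Parameter values are illustrative, not the authors' fitted model; their proposed variant additionally lets the parameters vary over time within a trial:

```python
import numpy as np

def lca_trial(inputs, leak=0.2, inhib=0.3, noise=0.1,
              dt=0.01, duration=3.0, seed=0):
    """One trial of a leaky competing accumulator: each unit integrates
    its input minus leak and lateral inhibition, plus Gaussian noise,
    with activations floored at zero."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(inputs))
    for _ in range(int(duration / dt)):
        others = x.sum() - x                     # competitors' activity
        dx = (inputs - leak * x - inhib * others) * dt
        dx += noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)              # non-negativity constraint
    return int(np.argmax(x))                     # choice: most active unit

# The disk (accumulator) with the higher mean input should win most trials.
choices = [lca_trial(np.array([1.0, 0.8]), seed=s) for s in range(50)]
print(sum(c == 0 for c in choices))  # a large majority of the 50 trials
```

    With inhibition stronger than leak, as here, early evidence tends to dominate (primacy); weakening inhibition or strengthening leak late in the trial is one way such a model can also produce the recency the authors observed.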

  16. Behavioral correlates of auditory streaming in rhesus macaques

    OpenAIRE

    Christison-Lagay, Kate L.; Cohen, Yale E.

    2013-01-01

    Perceptual representations of auditory stimuli (i.e., sounds) are derived from the auditory system’s ability to segregate and group the spectral, temporal, and spatial features of auditory stimuli—a process called “auditory scene analysis”. Psychophysical studies have identified several of the principles and mechanisms that underlie a listener’s ability to segregate and group acoustic stimuli. One important psychophysical task that has illuminated many of these principles and mechanisms is th...

  17. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music

    OpenAIRE

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300 ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. ...

  18. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  19. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  20. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions: children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response
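    Trial-to-trial response stability, one of the measures that degraded in noise, is commonly quantified as the correlation between the averages of two independent halves of the trials. A minimal sketch with simulated data (the waveform shape, trial count, and noise level below are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_samples = 300, 500

# Simulated single-trial responses: a common evoked waveform plus
# independent noise on every trial.
waveform = (np.sin(np.linspace(0, 8 * np.pi, n_samples))
            * np.exp(-np.linspace(0, 3, n_samples)))
trials = waveform + rng.normal(0.0, 1.5, size=(n_trials, n_samples))

# Stability: correlation between the averages of two independent halves
# of the trials (here, odd-numbered vs. even-numbered trials).
avg_odd = trials[1::2].mean(axis=0)
avg_even = trials[0::2].mean(axis=0)
stability = float(np.corrcoef(avg_odd, avg_even)[0, 1])
```

Noisier single-trial responses pull this correlation toward zero, so the measure falls as neural coding becomes less reliable.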

  1. Exploring temporal and functional synchronization in integrating models: A sensitivity analysis

    Science.gov (United States)

    Belete, Getachew F.; Voinov, Alexey

    2016-05-01

    When integrating independently built models, we may encounter components that describe the same processes or groups of processes using different assumptions and formalizations. The time stepping in component models can also be very different depending upon the temporal resolution chosen. Even if this time stepping is handled outside of the components (as assumed by good practice of component building), inappropriate temporal synchronization can produce either major run-time redundancy or loss of model accuracy. While components may need to be run asynchronously, finding the right times for them to communicate and exchange information becomes a challenge. We illustrate this by experimenting with a pair of simple component models connected by means of Web services, exploring how the timing of their input-output data exchange affects the performance of the overall integrated model. We also consider how best to communicate information between components that use different formalisms for the same processes. There are currently no generic recommendations for component synchronization, but sensitivity analysis for temporal and functional synchronization should become an essential part of integrated modeling.
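    The accuracy cost of loose synchronization can be illustrated with a toy coupled model: one component integrates a state that depends on a second component's output, but the two exchange data only at a chosen synchronization interval. The equations, time steps, and intervals below are illustrative assumptions, not the authors' models:

```python
import math

def run_coupled(t_end=10.0, dt=0.01, sync_interval=1.0):
    """Integrate du/dt = -u + v, with v(t) = exp(-t) supplied by a second
    'component' that is only queried every sync_interval time units."""
    u, t = 1.0, 0.0
    v_cached = 1.0              # last value received from the v-component
    next_sync = 0.0
    while t < t_end:
        if t >= next_sync:      # data-exchange (synchronization) point
            v_cached = math.exp(-t)
            next_sync += sync_interval
        u += dt * (-u + v_cached)   # explicit Euler step in the u-component
        t += dt
    return u

# Exact solution of du/dt = -u + exp(-t), u(0) = 1, is u(t) = (1 + t) exp(-t).
u_exact = (1.0 + 10.0) * math.exp(-10.0)

err_tight = abs(run_coupled(sync_interval=0.01) - u_exact)  # exchange every step
err_loose = abs(run_coupled(sync_interval=1.0) - u_exact)   # exchange every 1.0
```

With loose synchronization the u-component integrates against a stale value of v, so its error grows even though each component is individually accurate; that is the accuracy side of the tradeoff, while tighter exchange costs more communication.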

  2. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    OpenAIRE

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it i...

  3. The temporal binding window for audiovisual speech: Children are like little adults.

    Science.gov (United States)

    Hillock-Dunn, Andrea; Grantham, D Wesley; Wallace, Mark T

    2016-07-29

    During a typical communication exchange, both auditory and visual cues contribute to speech comprehension. The influence of vision on speech perception can be measured behaviorally using a task where incongruent auditory and visual speech stimuli are paired to induce perception of a novel token reflective of multisensory integration (i.e., the McGurk effect). This effect is temporally constrained in adults, with illusion perception decreasing as the temporal offset between the auditory and visual stimuli increases. Here, we used the McGurk effect to investigate the development of the temporal characteristics of audiovisual speech binding in 7- to 24-year-olds. Surprisingly, results indicated that although older participants perceived the McGurk illusion more frequently, no age-dependent change in the temporal boundaries of audiovisual speech binding was observed. PMID:26920938

  4. Music perception and cognition following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Tramo, M J; Bharucha, J J; Musiek, F E

    1990-01-01

    We present experimental and anatomical data from a case study of impaired auditory perception following bilateral hemispheric strokes. To consider the cortical representation of sensory, perceptual, and cognitive functions mediating tonal information processing in music, pure tone sensation thresholds, spectral intonation judgments, and the associative priming of spectral intonation judgments by harmonic context were examined, and lesion localization was analyzed quantitatively using straight-line two-dimensional maps of the cortical surface reconstructed from magnetic resonance images. Despite normal pure tone sensation thresholds at 250-8000 Hz, the perception of tonal spectra was severely impaired, such that harmonic structures (major triads) were almost uniformly judged to sound dissonant; yet, the associative priming of spectral intonation judgments by harmonic context was preserved, indicating that cognitive representations of tonal hierarchies in music remained intact and accessible. Brainprints demonstrated complete bilateral lesions of the transverse gyri of Heschl and partial lesions of the right and left superior temporal gyri involving 98 and 20% of their surface areas, respectively. In the right hemisphere, there was partial sparing of the planum temporale, temporoparietal junction, and inferior parietal cortex. In the left hemisphere, all of the superior temporal region anterior to the transverse gyrus and parts of the planum temporale, temporoparietal junction, inferior parietal cortex, and insula were spared. These observations suggest that (1) sensory, perceptual, and cognitive functions mediating tonal information processing in music are neurologically dissociable; (2) complete bilateral lesions of primary auditory cortex combined with partial bilateral lesions of auditory association cortex chronically impair tonal consonance perception; (3) cognitive functions that hierarchically structure pitch information and generate harmonic expectancies

  5. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  6. Exploring the role of the posterior middle temporal gyrus in semantic cognition: Integration of anterior temporal lobe with executive processes.

    Science.gov (United States)

    Davey, James; Thompson, Hannah E; Hallam, Glyn; Karapanagiotidis, Theodoros; Murphy, Charlotte; De Caso, Irene; Krieger-Redwood, Katya; Bernhardt, Boris C; Smallwood, Jonathan; Jefferies, Elizabeth

    2016-08-15

    Making sense of the world around us depends upon selectively retrieving information relevant to our current goal or context. However, it is unclear whether selective semantic retrieval relies exclusively on general control mechanisms recruited in demanding non-semantic tasks, or instead on systems specialised for the control of meaning. One hypothesis is that the left posterior middle temporal gyrus (pMTG) is important in the controlled retrieval of semantic (not non-semantic) information; however this view remains controversial since a parallel literature links this site to event and relational semantics. In a functional neuroimaging study, we demonstrated that an area of pMTG implicated in semantic control by a recent meta-analysis was activated in a conjunction of (i) semantic association over size judgements and (ii) action over colour feature matching. Under these circumstances the same region showed functional coupling with the inferior frontal gyrus - another crucial site for semantic control. Structural and functional connectivity analyses demonstrated that this site is at the nexus of networks recruited in automatic semantic processing (the default mode network) and executively demanding tasks (the multiple-demand network). Moreover, in both task and task-free contexts, pMTG exhibited functional properties that were more similar to ventral parts of inferior frontal cortex, implicated in controlled semantic retrieval, than more dorsal inferior frontal sulcus, implicated in domain-general control. Finally, the pMTG region was functionally correlated at rest with other regions implicated in control-demanding semantic tasks, including inferior frontal gyrus and intraparietal sulcus. We suggest that pMTG may play a crucial role within a large-scale network that allows the integration of automatic retrieval in the default mode network with executively-demanding goal-oriented cognition, and that this could support our ability to understand actions and non

  7. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices.

    Science.gov (United States)

    Bidelman, Gavin M; Syed Khaja, Ameenuddin

    2014-06-20

    Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system. PMID:24793771
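    A recovery function like the one described here can be reduced to a single "temporal threshold" by interpolating the ICI at which the normalized response amplitude crosses 50%. A sketch with made-up recovery values (the numbers are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical normalized ABR amplitudes at each inter-click interval (ms),
# rising from full suppression (0.7 ms) toward full recovery (25 ms).
ici = np.array([0.7, 1.0, 2.0, 4.0, 8.0, 16.0, 25.0])
recovery = np.array([0.00, 0.05, 0.25, 0.55, 0.80, 0.95, 1.00])

# Temporal threshold: the ICI at which recovery crosses 50%, by linear
# interpolation on the monotonically increasing recovery function.
threshold_ici = float(np.interp(0.5, recovery, ici))
```

Per-subject thresholds obtained this way could then be correlated with behavioral Q10 or FDL values (e.g., via `np.corrcoef`) to test the spectrotemporal tradeoff the study reports.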

  8. Integrating Temporal and Spectral Features of Astronomical Data Using Wavelet Analysis for Source Classification

    CERN Document Server

    Ukwatta, T N

    2016-01-01

    Temporal and spectral information extracted from a stream of photons received from astronomical sources is the foundation on which we build understanding of various objects and processes in the Universe. Typically astronomers fit a number of models separately to light curves and spectra to extract relevant features. These features are then used to classify, identify, and understand the nature of the sources. However, these feature extraction methods may not be optimally sensitive to unknown properties of light curves and spectra. One can use the raw light curves and spectra as features to train classifiers, but this typically increases the dimensionality of the problem, often by several orders of magnitude. We overcome this problem by integrating light curves and spectra to create an abstract image and using wavelet analysis to extract important features from the image. Such features incorporate both temporal and spectral properties of the astronomical data. Classification is then performed on those abstract ...
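    The general idea of fusing a light curve and a spectrum into one image and extracting wavelet features can be sketched as below. The outer-product construction of the "abstract image" is my assumption for illustration (the paper's actual construction is not specified here), and the example uses the third-party PyWavelets package:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)

# Hypothetical inputs for one source: a light curve (flux vs. time) and a
# spectrum (counts vs. energy bin), here simulated as Poisson draws.
light_curve = rng.poisson(50, size=64).astype(float)
spectrum = rng.poisson(30, size=64).astype(float)

# One simple way to combine them into an image: the outer product, so that
# rows reflect temporal structure and columns spectral structure.
image = np.outer(light_curve, spectrum)

# Single-level 2-D discrete wavelet transform; the detail sub-bands capture
# joint temporal/spectral variation at that scale.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')

# Compact feature vector: energy of each sub-band, usable by a classifier.
features = [float(np.sum(c ** 2)) for c in (cA, cH, cV, cD)]
```

Deeper decompositions (`pywt.wavedec2`) would give a multi-scale feature vector while keeping the dimensionality far below that of the raw light curve and spectrum.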

  9. Integration of smartphones and webcam for the measure of spatio-temporal gait parameters.

    Science.gov (United States)

    Barone, V; Maranesi, E; Fioretti, S

    2014-01-01

    A very low cost prototype has been made for the spatial and temporal analysis of human movement using an integrated system of latest-generation smartphones and a high-definition webcam, controlled by a laptop. The system can be used to analyze mainly planar motions in non-structured environments. In this paper, the accelerometer signal captured by the 3D sensor embedded in one smartphone, and the positions of colored markers derived from the webcam frames, are used to compute spatial-temporal parameters of gait. The accuracy of the results is compared with that obtainable with gold-standard instrumentation. The system is characterized by very low cost and a very high level of automation, and is intended for use by non-expert users in ambulatory settings. PMID:25571351
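    Temporal gait parameters of the kind this system measures can be derived from an accelerometer trace by detecting one peak per step and differencing the peak times. A minimal sketch with a synthetic signal (the sampling rate, step frequency, and peak threshold are illustrative assumptions, not the paper's processing pipeline):

```python
import numpy as np

fs = 100.0                      # assumed accelerometer sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)

# Hypothetical vertical-acceleration trace with one sharp peak per step
# (a crude stand-in for a real smartphone accelerometer signal).
step_freq = 1.8                 # steps per second
accel = np.sin(2.0 * np.pi * step_freq * t) ** 3

# Step events: strict local maxima above a threshold.
mid = accel[1:-1]
is_peak = (mid > accel[:-2]) & (mid > accel[2:]) & (mid > 0.5)
step_times = t[1:-1][is_peak]

# Temporal gait parameters.
step_intervals = np.diff(step_times)        # seconds between steps
cadence = 60.0 / step_intervals.mean()      # steps per minute
```

Spatial parameters (step length, walking speed) would then come from the webcam marker positions at the detected step times.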

  10. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  11. Temporal integration of loudness, loudness discrimination, and the form of the loudness function

    OpenAIRE

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1997-01-01

    Temporal integration for loudness of 5-kHz tones was measured as a function of level between 2 and 60 dB SL. Absolute thresholds and levels required to produce equal loudness were measured for 2-, 10-, 50- and 250-ms tones using adaptive, two-interval, two-alternative forced-choice procedures. The procedure for loudness balances is new and obtained concurrent measurements for ten tone pairs in ten interleaved tracks. Each track converged at the level required to make the variable stimulus jus...
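    A minimal two-down/one-up adaptive track of the kind used in such two-interval, two-alternative forced-choice (2I-2AFC) procedures can be sketched as follows. The simulated listener, psychometric slope, step size, and stopping rule are illustrative assumptions, not the paper's procedure:

```python
import random

random.seed(3)

def staircase_2afc(true_threshold_db, start_db=40.0, step_db=2.0, n_trials=200):
    """2-down/1-up adaptive track: converges near the 70.7%-correct point.
    The simulated 2AFC listener is correct with probability between 0.5
    (chance) and 1.0, rising around true_threshold_db."""
    level, correct_run, last_direction = start_db, 0, 0
    reversals = []
    for _ in range(n_trials):
        p_detect = 1.0 / (1.0 + 10.0 ** (-(level - true_threshold_db) / 4.0))
        p_correct = 0.5 + 0.5 * p_detect        # 2AFC: floor at chance
        if random.random() < p_correct:
            correct_run += 1
            if correct_run == 2:                # two in a row -> step down
                correct_run = 0
                if last_direction == +1:
                    reversals.append(level)     # direction change: reversal
                level -= step_db
                last_direction = -1
        else:                                   # any miss -> step up
            correct_run = 0
            if last_direction == -1:
                reversals.append(level)
            level += step_db
            last_direction = +1
    # Threshold estimate: mean of the last few reversal levels.
    return sum(reversals[-6:]) / len(reversals[-6:])

estimate = staircase_2afc(true_threshold_db=20.0)
```

Running several such tracks interleaved, as in the study, prevents the listener from predicting the level of the next trial in any one track.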

  12. Temporal structure and complexity affect audio-visual correspondence detection

    Directory of Open Access Journals (Sweden)

    Rachel N Denison

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration.
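    Detecting correspondence from shared temporal structure across a crossmodal lag can be sketched as a lagged correlation between two event streams. The stream statistics and the 150 ms lag below are simulated assumptions, not the study's stimuli:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000                        # 2 s of events at 1 ms resolution

# Stochastic, irregular visual event stream (the shared temporal structure).
visual = (rng.random(n) < 0.01).astype(float)

# Auditory stream: the same structure delayed by 150 ms, plus unrelated events.
true_lag = 150
auditory = np.roll(visual, true_lag) + (rng.random(n) < 0.005)

def corr_at(lag):
    """Correlation of the streams with the auditory stream shifted back by lag."""
    m = n - lag
    return float(np.corrcoef(auditory[lag:lag + m], visual[:m])[0, 1])

# The lag with the highest correlation estimates the audio-visual offset.
best_lag = max(range(0, 301), key=corr_at)
```

A rich, irregular stream gives a sharp correlation peak at the true lag; a perfectly rhythmic stream would instead produce many equally high peaks, one per period, which is one way to think about the advantage of stochastic streams reported here.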

  13. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke. They are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of these occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  14. In search of an auditory engram

    Science.gov (United States)

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, it raises the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory. PMID:15967995

  15. Configural integration of temporal and contextual information in rats: Automated measurement in appetitive and aversive preparations.

    Science.gov (United States)

    Dumigan, Natasha M; Lin, Tzu-Ching E; Good, Mark; Honey, Robert C

    2015-06-01

    Two experiments investigated the capacity of rats to learn configural discriminations requiring integration of contextual (where) with temporal (when) information. In Experiment 1, during morning training sessions, food was delivered in context A and not in context B, whereas during afternoon sessions food was delivered in context B and not in context A. Rats acquired this discrimination over the course of 20 days. Experiment 2 employed a directly analogous aversive conditioning procedure in which footshock served in place of food. This procedure allowed the acquisition of the discrimination to be assessed through changes in activity to the contextual + temporal configurations (i.e., inactivity or freezing) and modulation of the immediate impact of footshock presentations (i.e., post-shock activity bursts). Both measures provided evidence of configural learning over the course of 12 days, with a final test showing that the presentation of footshock resulted in more post-shock activity in the nonreinforced than reinforced configurations. These behavioral effects reveal important parallels between (i) configural discrimination learning involving components allied to episodic memory and (ii) simple conditioning. PMID:25762427

  16. ePRISM: A case study in multiple proxy and mixed temporal resolution integration

    Science.gov (United States)

    Robinson, Marci M.; Dowsett, Harry J.

    2010-01-01

    As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment, designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (∼3.3 to 3.0 Ma) yet still similar in many ways to modern conditions, and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records from multiple paleotemperature proxies.
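    One simple reading of a "warm peak average" is the mean of the warmest fraction of a site's SST estimates within the interval, rather than the value tied to a single warm isotope event. The definition and the numbers below are illustrative assumptions, not PRISM's published method or data:

```python
import numpy as np

# Hypothetical SST estimates (°C) from one site's time series within the
# narrowed interval, mixing cooler and warmer excursions.
sst = np.array([24.1, 25.3, 27.8, 26.9, 25.0, 28.2, 27.5, 24.6, 28.0, 26.2])

# Warm peak average: mean of the warmest third of the estimates.
k = len(sst) // 3
warm_peak_avg = float(np.sort(sst)[-k:].mean())
```

Because it pools several warm values, such an average can incorporate records of differing temporal resolution and age control, where single-event tie points could not.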

  17. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  18. Visual and Auditory Synchronization Deficits Among Dyslexic Readers as Compared to Non-impaired Readers: A Cross-Correlation Algorithm Analysis

    Directory of Open Access Journals (Sweden)

    Zvia Breznitz

    2014-06-01

    Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed-of-processing gap (asynchrony) between these two modalities, which has been suggested as related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired university adult readers were given stimuli in a reaction time procedure where participants were asked to identify whether the stimulus type was only visual, only auditory, or crossmodally integrated. Accuracy, reaction time, and event-related potential (ERP) measures were obtained for each of the three conditions. An algorithm was developed to measure the contribution of the temporal speed of processing of each modality to crossmodal integration in each group of participants. Results obtained using this model for the analysis of the current study data indicated that in the crossmodal integration condition, the presence of the auditory modality in the pre-response time frame (170-240 ms after stimulus presentation) increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal speeds of processing of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia.

  19. Fundamental deficits of auditory perception in Wernicke’s aphasia

    OpenAIRE

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew; Griffiths, Timothy; Sage, Karen

    2012-01-01

    Objective: This work investigates the nature of the comprehension impairment in Wernicke’s aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke’s aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated wit...

  20. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany Plakke

    2014-07-01

    The functional auditory system extends from the ears to the frontal lobes, with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections, as some prefrontal neurons exhibit responses to features in acoustic stimuli while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information, with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli, or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  1. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    Science.gov (United States)

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning, from visual attenuation of the auditory cortex to auditory attenuation of the visual cortex. Second, associative areas changed their sensory response profile from strongest response for visual to strongest for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found: sensory dominance in sensory areas and audiovisual convergence in the associative middle temporal gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  2. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  3. Frequency band-importance functions for auditory and auditory-visual speech recognition

    Science.gov (United States)

    Grant, Ken W.

    2005-04-01

    In many everyday listening environments, speech communication involves the integration of both acoustic and visual speech cues. This is especially true in noisy and reverberant environments where the speech signal is highly degraded, or when the listener has a hearing impairment. Understanding the mechanisms involved in auditory-visual integration is a primary interest of this work. Of particular interest is whether listeners are able to allocate their attention to various frequency regions of the speech signal differently under auditory-visual conditions and auditory-alone conditions. For auditory speech recognition, the most important frequency regions tend to be around 1500-3000 Hz, corresponding roughly to important acoustic cues for place of articulation. The purpose of this study is to determine the most important frequency region under auditory-visual speech conditions. Frequency band-importance functions for auditory and auditory-visual conditions were obtained by having subjects identify speech tokens under conditions where the speech-to-noise ratio of different parts of the speech spectrum is independently and randomly varied on every trial. Point biserial correlations were computed for each separate spectral region and the normalized correlations are interpreted as weights indicating the importance of each region. Relations among frequency-importance functions for auditory and auditory-visual conditions will be discussed.
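The correlational-weights analysis described above can be sketched in a few lines: per-band speech-to-noise ratios are varied independently across trials, the point-biserial correlation between each band's SNR and trial correctness is computed, and the positive correlations are normalized into importance weights. This is a toy Python/NumPy simulation with a made-up listener whose performance depends only on one band; all names and parameter values are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands = 2000, 5

# Per-trial, per-band SNR (dB), varied independently as in the paradigm.
snr = rng.uniform(-10, 10, size=(n_trials, n_bands))

# Toy listener: only band 2 drives intelligibility in this simulation.
p_correct = 1.0 / (1.0 + np.exp(-0.4 * snr[:, 2]))
correct = (rng.random(n_trials) < p_correct).astype(float)

# Point-biserial correlation of each band's SNR with trial correctness.
r = np.array([np.corrcoef(snr[:, b], correct)[0, 1]
              for b in range(n_bands)])

# Normalize the positive correlations into weights that sum to 1.
w = np.clip(r, 0, None)
weights = w / w.sum()
```

The band that actually carries the information receives by far the largest weight, which is exactly how the empirical band-importance function is read off from the correlations.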

  4. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W.; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development 1-4 as well as mechanisms for temporal coding in the central nervous system 5-7.

  5. Famous face identification in temporal lobe epilepsy: Support for a multimodal integration model of semantic memory

    Science.gov (United States)

    Drane, Daniel L.; Ojemann, Jeffrey G.; Phatak, Vaishali; Loring, David W.; Gross, Robert E.; Hebb, Adam O.; Silbergeld, Daniel L.; Miller, John W.; Voets, Natalie L.; Saindane, Amit M.; Barsalou, Lawrence; Meador, Kimford J.; Ojemann, George A.; Tranel, Daniel

    2012-01-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for integrated multimodal theories of semantic memory.

  6. Famous face identification in temporal lobe epilepsy: support for a multimodal integration model of semantic memory.

    Science.gov (United States)

    Drane, Daniel L; Ojemann, Jeffrey G; Phatak, Vaishali; Loring, David W; Gross, Robert E; Hebb, Adam O; Silbergeld, Daniel L; Miller, John W; Voets, Natalie L; Saindane, Amit M; Barsalou, Lawrence; Meador, Kimford J; Ojemann, George A; Tranel, Daniel

    2013-06-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. 
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for integrated multimodal theories of semantic memory.

  7. Auditory profile and high resolution CT scan in autism spectrum disorders children with auditory hypersensitivity.

    Science.gov (United States)

    Thabet, Elsaeid M; Zaghloul, Hesham S

    2013-08-01

    Autism is the third most common developmental disorder, following mental retardation and cerebral palsy. ASD children have been described more often as being preoccupied with or agitated by noise. The aim of this study was to evaluate the prevalence and clinical significance of semicircular canal dehiscence detected on CT images in ASD children with intolerance to loud sounds, in an attempt to find an anatomical correlate with hyperacusis. 14 ASD children with auditory hypersensitivity and 15 ASD children without auditory hypersensitivity, serving as an age- and gender-matched control group, were submitted to history taking, otological examination, tympanometry, and acoustic reflex threshold measurement. ABR was done to validate normal peripheral hearing and the integrity of the auditory brainstem pathway. High resolution CT petrous and temporal bone imaging was performed on all participating children. All participants had normal hearing sensitivity in ABR testing. Absolute ABR peaks for waves I and III showed no statistically significant difference between the two groups, while the absolute wave V peak and interpeak latencies I-V and III-V were shorter in duration in the study group when compared to the control group. CT scans revealed SSCD in 4 out of 14 of the study group (29%); the dehiscence was bilateral in one patient and unilateral in three patients. None of the control group showed SSCD. In conclusion, we have reported evidence that apparent hypersensitivity to auditory stimuli (short conduction time in ABR) despite normal physiological measures in ASD children with auditory hypersensitivity can provide a clinical clue to a possible SSCD. PMID:23580033

  8. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc. PMID:26915331

  9. Postnatal development of synaptic properties of the GABAergic projection from the inferior colliculus to the auditory thalamus

    OpenAIRE

    Venkataraman, Yamini; Bartlett, Edward L.

    2013-01-01

    The development of auditory temporal processing is important for processing complex sounds as well as for acquiring reading and language skills. Neuronal properties and sound processing change dramatically in auditory cortex neurons after the onset of hearing. However, the development of the auditory thalamus or medial geniculate body (MGB) has not been well studied over this critical time window. Since synaptic inhibition has been shown to be crucial for auditory temporal processing, this st...

  10. INTEGRAL study of temporal properties of bright flares in Supergiant Fast X-ray Transients

    CERN Document Server

    Sidoli, L; Postnov, K

    2016-01-01

    We have characterized the typical temporal behaviour of the bright X-ray flares detected from the three Supergiant Fast X-ray Transients showing the most extreme transient behaviour (XTEJ1739-302, IGRJ17544-2619, SAXJ1818.6-1703). We focus here on the cumulative distributions of the waiting-time (the time interval between two consecutive X-ray flares) and of the duration of the hard X-ray activity (the duration of the brightest phase of an SFXT outburst), as observed by INTEGRAL/IBIS in the energy band 17-50 keV. Adopting the cumulative distribution of waiting-times, it is possible to identify the typical timescale that clearly separates different outbursts, each composed of several single flares on ks timescales. This allowed us to measure the duration of the brightest phase of the outbursts from these three targets, finding that they show heavy-tailed cumulative distributions. We observe a correlation between the total energy emitted during SFXT outbursts and the time interval covered by the outbursts (defined as the ...
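A minimal sketch of the waiting-time approach described above: given flare arrival times, waiting times larger than a chosen threshold mark the boundaries between outbursts, whose durations can then be measured. This is written in Python/NumPy; `split_outbursts`, the threshold value, and the example times are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def split_outbursts(flare_times, gap_threshold):
    """Group flare arrival times (s) into outbursts: consecutive flares
    separated by less than `gap_threshold` belong to the same outburst."""
    flare_times = np.sort(np.asarray(flare_times, dtype=float))
    waits = np.diff(flare_times)
    # A new outburst starts wherever the waiting time exceeds the threshold.
    breaks = np.where(waits > gap_threshold)[0] + 1
    return np.split(flare_times, breaks)

# Two outbursts of ks-scale flares separated by a long quiescent gap.
times = [0.0, 2e3, 5e3, 9e5, 9.03e5]
groups = split_outbursts(times, gap_threshold=1e4)
durations = [g[-1] - g[0] for g in groups]
```

In practice the threshold is not chosen by hand but read off from a break in the cumulative waiting-time distribution, which cleanly separates the intra-outburst (ks) waits from the inter-outburst ones.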

  11. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte; Siebner, Hartwig R

    2013-01-01

    and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In...... the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was...

  12. Processamento linguístico e processamento auditivo temporal em crianças com distúrbio específico de linguagem Linguistic and auditory temporal processing in children with specific language impairment

    OpenAIRE

    Talita Fortunato-Tavares; Caroline Nunes Rocha; Claudia Regina Furquim de Andrade; Débora Maria Befi-Lopes; Eliane Schochat; Arild Hestvik; Schwartz, Richard G.

    2009-01-01

    TOPIC: several studies suggest an association between specific language impairment (SLI) and deficits in auditory processing. Research provides evidence that the discrimination of brief stimuli is compromised in children with SLI. This deficit would lead to difficulties in developing the phonological skills necessary to map phonemes and to decode and encode words and sentences effectively and automatically. However, the correlation between temporal processing (TP) and language...

  13. Role of DARPP-32 and ARPP-21 in the Emergence of Temporal Constraints on Striatal Calcium and Dopamine Integration.

    Science.gov (United States)

    Nair, Anu G; Bhalla, Upinder S; Hellgren Kotaleski, Jeanette

    2016-09-01

    In reward learning, the integration of NMDA-dependent calcium and dopamine by striatal projection neurons leads to potentiation of corticostriatal synapses through CaMKII/PP1 signaling. In order to elicit the CaMKII/PP1-dependent response, the calcium and dopamine inputs should arrive in temporal proximity and must follow a specific (dopamine after calcium) order. However, little is known about the cellular mechanism which enforces these temporal constraints on the signal integration. In this computational study, we propose that these temporal requirements emerge as a result of the coordinated signaling via two striatal phosphoproteins, DARPP-32 and ARPP-21. Specifically, DARPP-32-mediated signaling could implement an input-interval dependent gating function, via transient PP1 inhibition, thus enforcing the requirement for temporal proximity. Furthermore, ARPP-21 signaling could impose the additional input-order requirement of calcium and dopamine, due to its Ca2+/calmodulin sequestering property when dopamine arrives first. This highlights the possible role of phosphoproteins in the temporal aspects of striatal signal transduction. PMID:27584878

  14. Task-dependent calibration of auditory spatial perception through environmental visual observation

    OpenAIRE

    Luca Brayda

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio tas...

  15. Late Maturation of Auditory Perceptual Learning

    Science.gov (United States)

    Huyck, Julia Jones; Wright, Beverly A.

    2011-01-01

    Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…

  16. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    Directory of Open Access Journals (Sweden)

    Vibhakar C Kotak

    2015-08-01

    The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation, and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx, and that auditory-driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the auditory cortex.

  17. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal Amitay

    2012-11-01

    Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers, upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.
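The signal-detection account sketched in this abstract can be made concrete with a toy model: each phase-locked spike contributes an independent noisy frequency estimate, averaging over a tone's duration reduces the internal noise by the square root of the number of estimates, and learning reduces the per-spike noise. The code below is a hypothetical Python illustration, not the authors' computational model; `fd_threshold` and every parameter value are made up for the sketch.

```python
import math

def fd_threshold(sigma_cycle, duration, rate=200.0, d_prime=1.0):
    """Frequency-difference threshold (Hz) under a simple SDT model:
    each phase-locked spike gives an independent noisy frequency
    estimate with s.d. `sigma_cycle`; averaging n = rate * duration
    estimates reduces the internal noise by sqrt(n), and the threshold
    is the frequency difference that reaches the criterion d'."""
    n = rate * duration
    sigma_internal = sigma_cycle / math.sqrt(n)
    return d_prime * sigma_internal

# Learning modeled as improved phase-locking fidelity (smaller sigma):
before = fd_threshold(sigma_cycle=40.0, duration=0.1)
after = fd_threshold(sigma_cycle=20.0, duration=0.1)

# The same fidelity gain applies to short and long tones alike, which is
# why this mechanism alone predicts symmetric transfer across durations.
short_after = fd_threshold(sigma_cycle=20.0, duration=0.015)
```

The asymmetry actually observed (short-to-long transfer but not the reverse) is what motivates the abstract's second mechanism, a widened integration window, which cannot help a tone shorter than the window itself.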

  18. Representing Representation: Integration between the Temporal Lobe and the Posterior Cingulate Influences the Content and Form of Spontaneous Thought.

    Directory of Open Access Journals (Sweden)

    Jonathan Smallwood

    When not engaged in the moment, we often spontaneously represent people, places, and events that are not present in the environment. Although this capacity has been linked to the default mode network (DMN), it remains unclear how interactions between the nodes of this network give rise to particular mental experiences during spontaneous thought. One hypothesis is that the core of the DMN integrates information from medial and lateral temporal lobe memory systems, which represent different aspects of knowledge. Individual differences in the connectivity between temporal lobe regions and the default mode network core would then predict differences in the content and form of people's spontaneous thoughts. This study tested this hypothesis by examining the relationship between seed-based functional connectivity and the contents of spontaneous thought recorded in a laboratory study several days later. Variations in connectivity from both medial and lateral temporal lobe regions were associated with different patterns of spontaneous thought, and these effects converged on an overlapping region in the posterior cingulate cortex. We propose that the posterior core of the DMN acts as a representational hub that integrates information represented in the medial and lateral temporal lobes, and that this process is important in determining the content and form of spontaneous thought.

  19. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses at higher levels of the central auditory system in subjects with persistent developmental stuttering (PDS) using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  20. On the relations among temporal integration for loudness, loudness discrimination, and the form of the loudness function. (A)

    DEFF Research Database (Denmark)

    Poulsen, Torben; Buus, Søren; Florentine, M

    1996-01-01

    Temporal integration for loudness was measured as a function of level from 2 to 60 dB SL using 2-, 10-, 50-, and 250-ms tones at 5 kHz. The adaptive 2I,2AFC procedure converged at the level required to make the variable stimulus just louder than the fixed stimulus. Thus the data yield estimates of...... the levels required to make tones of different durations equally loud and of the just noticeable differences for loudness level. Results for four listeners with normal hearing show that the amount of temporal integration, defined as the level difference between equally loud short and long tones......, varies markedly with level and is largest at moderate levels. The effect of level increases as the duration of the short stimulus decreases and is largest for comparisons between the 2- and 250-ms tones. The loudness-level jnds are also largest at moderate levels and, contrary to traditional jnds for the...
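An adaptive procedure of the general kind described above can be sketched as a simple 1-up/1-down track on a simulated listener, converging on the level at which the variable tone is judged louder on half the trials (the equal-loudness point); the difference between the tracked levels for a short and a long tone then estimates the amount of temporal integration. This is a toy Python simulation, not the 2I,2AFC procedure actually used in the study, and the equal-loudness levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_louder(level_var, level_eq, slope=1.0):
    """Probability that the variable tone is judged louder than the
    fixed tone, given the level (dB) at which they are equally loud."""
    return 1.0 / (1.0 + np.exp(-slope * (level_var - level_eq)))

def staircase(level_eq, start=70.0, step=2.0, n_trials=400):
    """1-up/1-down adaptive track: the level moves down after a 'louder'
    response and up otherwise, so it converges on the 50% point, i.e.
    the level at which the variable tone is just as loud as the fixed
    one. The equal-loudness estimate is the mean of the reversal levels."""
    level, reversals, last_move = start, [], None
    for _ in range(n_trials):
        judged_louder = rng.random() < p_louder(level, level_eq)
        move = -step if judged_louder else step
        if last_move is not None and move != last_move:
            reversals.append(level)
        last_move = move
        level += move
    return float(np.mean(reversals[4:]))  # discard early reversals

# Hypothetical equal-loudness points: the short (2-ms) tone needs about
# 15 dB more level than the long (250-ms) tone to sound equally loud.
eq_short, eq_long = 75.0, 60.0
integration_db = staircase(eq_short) - staircase(eq_long)
```

The level dependence reported in the abstract would appear in such a simulation as different `eq_short - eq_long` gaps at different sensation levels, with the gap largest at moderate levels.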

  1. Regional heavy metal pollution in crops by integrating physiological function variability with spatio-temporal stability using multi-temporal thermal remote sensing

    Science.gov (United States)

    Liu, Meiling; Liu, Xiangnan; Zhang, Biyao; Ding, Chao

    2016-09-01

    Heavy metal stress in crops is characterized by stability in space and time, which differs from other stressors that are typically more transient (e.g., drought, pests/diseases, and mismanagement). The objective of this study is to assess regional heavy metal stress in rice by integrating physiological function variability with spatio-temporal stability based on multi-temporal thermal infrared (TIR) remote sensing images. The field in which the experiment was conducted is located in Zhuzhou City, Hunan Province, China. HJ-1B images and in-situ measured data were collected from rice growing in heavy metal contaminated soils. A stress index (SI) was devised as an indicator of the degree of heavy metal stress on the rice in different growth stages, and a time-spectrum feature space (TSFS) model was used to determine rice heavy metal stress levels. The results indicate that (i) SI is a good indicator of rice damage caused by heavy metal stress: for a given growth stage, SI is lowest in rice subject to high pollution, larger under medium pollution, and largest under low pollution. (ii) SI shows some variation across the growth stages of rice, with the minimum SI occurring at the flowering stage. (iii) The TSFS model is successful at identifying rice heavy metal stress, and the stress levels it identified were stable across the two different years in which it was applied. This study suggests that regional heavy metal stress in crops can be accurately detected using TIR technology if a sensitive indicator of crop physiological function impairment is used and an effective model is selected. A combination of spectral and spatio-temporal information appears to be a very promising method for monitoring crops subject to various stressors.

  2. Some related aspects of platypus electroreception: temporal integration behaviour, electroreceptive thresholds and directionality of the bill acting as an antenna.

    OpenAIRE

    Fjällbrant, T T; Manger, P. R.; Pettigrew, J D

    1998-01-01

    This paper focuses on how the electric field from the prey of the platypus is detected with respect to the questions of threshold determination and how the platypus might localize its prey. A new behaviour in response to electrical stimuli below the thresholds previously reported is presented. The platypus shows a voluntary exploratory behaviour that results from a temporal integration of a number of consecutive stimulus pulses. A theoretical analysis is given, which includes the threshold de...

  3. The Global Food Price Crisis and China-World Rice Market Integration: A Spatial-Temporal Rational Expectations Equilibrium Model

    OpenAIRE

    Liu, Xianglin; Romero-Aguilar, Randall S.; Chen, Shu-Ling; Miranda, Mario J.

    2013-01-01

    In this paper, we examine how China, the world’s largest rice producer and consumer, would affect the international rice market if it liberalized its trade in rice and became more fully integrated into the global rice market. The impacts of trade liberalization are estimated using a spatial-temporal rational expectations model of the world rice market characterized by four interdependent markets with stochastic production patterns, constant-elasticity demands, expected-profit maximizing priva...

  4. Integration of Temporal Contextual Information for Robust Acoustic Recognition of Bird Species from Real-Field Data

    Directory of Open Access Journals (Sweden)

    Iosif Mporas

    2013-06-01

    We report on the development of an automated acoustic bird recognizer with improved noise robustness, which is part of a long-term project aiming at the establishment of an automated biodiversity monitoring system at the Hymettus Mountain near Athens, Greece. In particular, a typical audio processing strategy, which has proved quite successful in various audio recognition applications, was augmented with a simple and effective mechanism for the integration of temporal contextual information into the decision-making process. In the present implementation, we consider integration of temporal contextual information by joint post-processing of the recognition results for a number of preceding and subsequent audio frames. In order to evaluate the usefulness of the proposed scheme on the task of acoustic bird recognition, we experimented with six widely used classifiers and a set of real-field audio recordings for two bird species which are present at the Hymettus Mountain. The highest recognition accuracy achieved on the real-field data was approximately 93%, while experiments with additive noise showed significant robustness in low signal-to-noise-ratio setups. In all cases, the integration of temporal contextual information was found to improve the overall accuracy of the recognizer.
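Joint post-processing over neighbouring frames, as described above, can be as simple as a sliding majority vote over each frame's label and its temporal context. Below is a minimal Python sketch under that assumption; the paper's actual fusion scheme may differ, and `smooth_decisions` with its labels is purely illustrative.

```python
from collections import Counter

def smooth_decisions(frame_labels, context=2):
    """Joint post-processing of per-frame recognition results: each
    frame's label is replaced by the majority vote over a window of
    `context` preceding and `context` subsequent frames (clipped at
    the sequence boundaries)."""
    n = len(frame_labels)
    smoothed = []
    for i in range(n):
        window = frame_labels[max(0, i - context):min(n, i + context + 1)]
        smoothed.append(Counter(window).most_common(1)[0][0])
    return smoothed

# Isolated misclassifications are absorbed by their temporal context.
raw = ["sparrow"] * 4 + ["noise"] + ["sparrow"] * 3
clean = smooth_decisions(raw, context=2)
```

Because bird vocalizations span many consecutive frames while noise bursts tend to be brief, this kind of temporal smoothing removes isolated errors at essentially no cost to correct detections.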

  5. Famous face identification in temporal lobe epilepsy: Support for a multimodal integration model of semantic memory

    OpenAIRE

    Drane, Daniel L.; Ojemann, Jeffrey G.; Phatak, Vaishali; Loring, David W.; Gross, Robert E.; Hebb, Adam O.; Silbergeld, Daniel L; Miller, John W.; Voets, Natalie L.; Saindane, Amit M; Barsalou, Lawrence; Meador, Kimford J.; Ojemann, George A.; Tranel, Daniel

    2012-01-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradi...

  6. Functional and effective connectivity in an fMRI study of an auditory-related task.

    Science.gov (United States)

    Caclin, Anne; Fonlupt, Pierre

    2006-05-01

    This study investigates the sets of brain areas that are functionally connected during an auditory goal-directed task. We used a paradigm including a resting-state condition and an active condition, which consisted of active listening to the footsteps of walking humans. Regional brain activity was measured using fMRI, and the adjusted values of activity in brain regions involved in the task were analysed using both principal component analysis and structural equation modelling. A first set of connected areas includes regions located in Heschl's gyrus, the planum temporale, the posterior superior temporal sulcus (in the so-called 'social cognition' area), and the parietal lobe. This network could be responsible for the perceptual integration of the auditory signal. A second set encompassing frontal regions is related to attentional control. The dorsolateral and medial prefrontal cortices exert mutual negative influences similar to those described during a visual goal-directed task [T. Chaminade & P. Fonlupt (2003) Eur. J. Neurosci., 18, 675-679]. Moreover, the dorsolateral prefrontal cortex (DLPFC) exerts a positive influence on the auditory areas during the task, as well as a strong negative influence on the visual areas. These results show that: (i) the negative influence between the medial and lateral parts of the frontal cortex during a goal-directed task does not depend on the input modality (visual or auditory), and (ii) the DLPFC activates the pathway of the relevant sensory modality and inhibits the non-relevant sensory modality pathway. PMID:16706860

  7. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

    Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate...... sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance for statistically blurred sound textures presented at a fixed blur was higher than for those presented as a...... gradual blur. The results suggest that the correct identification of sound textures is influenced by the preceding blurred stimulus. These findings draw parallels to the recognition of blurred images....

  8. Lateralization of auditory-cortex functions.

    Science.gov (United States)

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

    In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. Main emphasis is given to hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is bound not to informational sound content but to rapid temporal information, which is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of this functional specialization of sound processing. Altered forms of lateralization may be caused by top-down and bottom-up effects both inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding. PMID:14629926

  9. Integrating spatial and temporal probabilities for the annual landslide hazard maps in Shihmen watershed, Taiwan

    OpenAIRE

    Wu, C Y; Chen, S. C.

    2013-01-01

    Landslide spatial probability, temporal probability, and landslide size probability were employed to perform landslide hazard assessment in this study. Following a screening process, the landslide susceptibility-related factors included eleven intrinsic geomorphological factors and two extrinsic rainfall factors, which were evaluated as effective factors because of their higher correlation with the landslide distribution. Landslide area analysis was first employed to establish the...

  10. When Spatial and Temporal Contiguities Help the Integration in Working Memory: "A Multimedia Learning" Approach

    Science.gov (United States)

    Mammarella, Nicola; Fairfield, Beth; Di Domenico, Alberto

    2013-01-01

    Two experiments examined the effects of spatial and temporal contiguities in a working memory binding task that required participants to remember coloured objects. In Experiment 1, a black and white drawing and a corresponding phrase that indicated its colour perceptually were either near or far (spatial study condition), while in Experiment 2,…

  11. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking functions abnormally in the speech motor systems of persons who stutter (PWS. Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms, but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05. Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.

  12. Computational speech segregation based on an auditory-inspired modulation analysis

    DEFF Research Database (Denmark)

    May, Tobias; Dau, Torsten

    2014-01-01

    A monaural speech segregation system is presented that estimates the ideal binary mask from noisy speech based on supervised learning of amplitude modulation spectrogram (AMS) features. Instead of using linearly scaled modulation filters with constant absolute bandwidth, an auditory-inspired modulation filterbank with logarithmically scaled filters is employed. To reduce the dependency of the AMS features on the overall background noise level, a feature normalization stage is applied. In addition, a spectro-temporal integration stage is incorporated in order to exploit the context information...
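    The AMS idea can be caricatured in a few lines: extract an envelope, take its modulation spectrum, pool energy into octave-spaced (logarithmically scaled) modulation bands, and normalize the features. The test signal, band edges, and smoothing below are illustrative assumptions, not the paper's actual front end.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)

# Toy "noisy speech": a 500 Hz tone amplitude-modulated at 4 Hz,
# embedded in white noise.
clean = (1.0 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)
noisy = clean + 0.5 * rng.standard_normal(fs)

# Envelope: rectify, then smooth with a ~10 ms moving average.
env = np.convolve(np.abs(noisy), np.ones(160) / 160, mode="same")

# Modulation spectrum pooled into octave-spaced (log-scaled) bands.
spectrum = np.abs(np.fft.rfft(env - env.mean())) ** 2
freqs = np.fft.rfftfreq(env.size, 1 / fs)
edges = [1, 2, 4, 8, 16, 32, 64]  # band edges in Hz
ams = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                for lo, hi in zip(edges[:-1], edges[1:])])

# Feature normalization reduces the dependency on overall level.
ams_norm = (ams - ams.mean()) / ams.std()
```

    The 4 Hz modulation of the toy signal lands in the 4-8 Hz band, which dominates the pooled feature vector even in noise.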

  13. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration...... detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA...... filterbank was designed to approximate auditory filter-shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward masking data. The results of the present study demonstrate that a “purely” spectrum-based model approach can successfully describe auditory coloration detection even at high...
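    The spectral modulation produced by a single reflection can be written in closed form: for y(t) = x(t) + a·x(t−τ), the power transfer function is |H(f)|² = 1 + a² + 2a·cos(2πfτ), a comb whose peaks recur every 1/τ Hz. A minimal numerical check (gain and delay values are illustrative):

```python
import numpy as np

a = 0.8        # reflection gain
tau = 0.005    # 5 ms reflection delay
f = np.linspace(0.0, 2000.0, 20001)   # Hz, 0.1 Hz grid

# Comb-filter power spectrum of direct sound plus one reflection.
power = 1 + a**2 + 2 * a * np.cos(2 * np.pi * f * tau)

# Strict local maxima: spectral peaks recur every 1/tau = 200 Hz.
interior = (power[1:-1] > power[:-2]) & (power[1:-1] > power[2:])
peaks = f[1:-1][interior]
```

    The peak level is (1 + a)², so the ripple depth, and hence the salience of the coloration, grows with the reflection gain, while the ripple spacing is set only by the delay.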

  14. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG, containing the primary auditory cortex, planum temporale (PT, and superior temporal sulcus (STS, and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  15. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  16. Expectation and Attention in Hierarchical Auditory Prediction

    Science.gov (United States)

    Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M.; Bekinschtein, Tristan A.

    2013-01-01

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding. PMID:23825422

  17. On the relations among temporal integration for loudness, loudness discrimination, and the form of the loudness function. (A)

    OpenAIRE

    Poulsen, Torben; Buus, Søren; Florentine, M

    1996-01-01

    Temporal integration for loudness was measured as a function of level from 2 to 60 dB SL using 2-, 10-, 50-, and 250-ms tones at 5 kHz. The adaptive 2I,2AFC procedure converged at the level required to make the variable stimulus just louder than the fixed stimulus. Thus the data yield estimates of the levels required to make tones of different durations equally loud and of the just noticeable differences for loudness level. Results for four listeners with normal hearing show that the amount o...
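    The adaptive procedure can be caricatured with a simple simulated staircase that converges on the point of subjective equality (PSE): the variable tone's level moves down after a "louder" response and up after a "softer" one. The simulated listener, start level, and step-size rule are illustrative assumptions, not the study's actual 2I,2AFC implementation.

```python
import random

random.seed(4)
true_equal_level = 30.0  # dB: variable-tone level judged equally loud

def says_louder(level_db):
    # Noisy comparison of the variable tone against the fixed tone.
    return level_db + random.gauss(0.0, 1.0) > true_equal_level

level, step = 50.0, 4.0
reversals = []
last_direction = None
while len(reversals) < 8:
    direction = -1 if says_louder(level) else +1  # track down when "louder"
    if last_direction is not None and direction != last_direction:
        reversals.append(level)          # record the level at each reversal
        step = max(step / 2.0, 1.0)      # shrink the step, floor at 1 dB
    level += direction * step
    last_direction = direction

# Discard early reversals and average the rest as the PSE estimate.
estimate = sum(reversals[2:]) / len(reversals[2:])
```

    Because the track reverses direction only near the equality point, the late reversal levels cluster around the PSE regardless of the starting level.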

  18. Single-molecule diffusion and conformational dynamics by spatial integration of temporal fluctuations

    KAUST Repository

    Bayoumi, Maged Fouad

    2014-10-06

    Single-molecule localization and tracking have been used to translate the spatiotemporal information of individual molecules into maps of their diffusion behaviours. However, accurate analysis of diffusion behaviours, and the inclusion of other parameters such as the conformation and size of molecules, remain limitations of the method. Here, we report a method that addresses the limitations of existing single-molecule localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of the determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.
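    The core intuition, that a faster-diffusing molecule sweeps out new territory sooner, so its cumulative occupied area grows faster, can be sketched with a grid-based toy simulation. The grid size, step count, and diffusion coefficients are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

def cumulative_area(diff_coef, n_steps=20000, dt=0.001, cell=0.05, seed=0):
    """Simulate 2D Brownian motion and return the cumulative area:
    the number of distinct grid cells ever visited, times cell area."""
    rng = np.random.default_rng(seed)
    # Step standard deviation per axis for diffusion coefficient D: sqrt(2*D*dt).
    steps = rng.normal(0.0, np.sqrt(2.0 * diff_coef * dt), size=(n_steps, 2))
    path = np.cumsum(steps, axis=0)
    visited = set(map(tuple, np.floor(path / cell).astype(int).tolist()))
    return len(visited) * cell**2

area_slow = cumulative_area(0.1)   # slower diffusion
area_fast = cumulative_area(1.0)   # faster diffusion
```

    Fitting the growth of this cumulative area over time, rather than frame-to-frame displacements alone, is the quantity the abstract ties to molecular size and conformational dynamics.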

  19. Integrating active sensing into reactive synthesis with temporal logic constraints under partial observations

    OpenAIRE

    Fu, Jie; Topcu, Ufuk

    2014-01-01

    We introduce the notion of online reactive planning with sensing actions for systems with temporal logic constraints in partially observable and dynamic environments. With incomplete information on the dynamic environment, reactive controller synthesis amounts to solving a two-player game with partial observations, which has impractically high computational complexity. To alleviate this high computational burden, online replanning via sensing actions avoids solving the strategy in the reactive syst...

  20. Integration, Provenance, and Temporal Queries for Large-Scale Knowledge Bases

    OpenAIRE

    GAO, SHI

    2016-01-01

    Knowledge bases that summarize web information in RDF triples deliver many benefits, including support for natural language question answering and powerful structured queries that extract encyclopedic knowledge via SPARQL. Large scale knowledge bases grow rapidly in terms of scale and significance, and undergo frequent changes in both schema and content. Two critical problems have thus emerged: (i) how to support temporal queries that explore the history of knowledge bases or flash-back to th...

  1. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    Science.gov (United States)

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

    The auditory steady state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses induced by low rates (≤40 Hz) emanate mostly from central structures of the brain, whereas responses to high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. Moreover, the gamma-band ASSR (30-90 Hz) has been reported to play an important role in working memory, speech understanding and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech stimulus was a spoken Chinese phrase, and the noise-like reversed speech was obtained by temporally reversing it. Both auditory stimuli were modulated at a frequency of 40 Hz. Ten healthy subjects and 5 patients with hallucination symptoms participated in the experiment. Results showed a reduction in the left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the left-hemisphere auditory cortex responded more actively. The ASSR results were consistent with the patients' behavioral results. Therefore, the gamma-band ASSR is expected to be helpful for rapid and objective diagnosis of hallucination in the clinic. PMID:24142731
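    The stimulus construction can be sketched as follows, with a white-noise carrier standing in for the spoken Chinese phrase (an illustrative assumption); the 40 Hz modulator and the time-reversal manipulation mirror the abstract's description.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs  # 1 s of signal
rng = np.random.default_rng(2)

# Carrier amplitude-modulated at 40 Hz (raised-cosine modulator).
carrier = rng.standard_normal(t.size)
modulated = (1.0 + np.cos(2 * np.pi * 40.0 * t)) / 2.0 * carrier

# "Reversed speech" control: time reversal preserves the long-term
# magnitude spectrum but destroys the temporal order.
reversed_stim = modulated[::-1].copy()

# The envelope spectrum peaks at the 40 Hz modulation rate -- the
# component the steady-state response follows.
env = np.abs(modulated)
spec = np.abs(np.fft.rfft(env - env.mean()))
peak_freq = float(np.fft.rfftfreq(env.size, 1 / fs)[spec.argmax()])
```

    Because reversal leaves the magnitude spectrum and the 40 Hz envelope rate intact, any response difference between the two stimuli can be attributed to their temporal (linguistic) structure rather than their spectra.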

  2. Superior temporal activation in response to dynamic audio-visual emotional cues

    OpenAIRE

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2008-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audiovisual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual cues. Emotion perception research has focused on static facial cues; however, dynamic audiovisual (AV) cues mimic real-world social cues more accura...

  3. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

    auditory lifestyle was correlated with self-report outcome. However, overall the predictive leverage of the various measures was moderate, with single predictors explaining only up to 19 percent of the variance in the auditory-performance measures. a)Now at CNBH, Department of Physiology, Development and...... no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal...... correlation exists between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and...

  4. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, A.L.

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made by all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts.
A user study was conducted to validate that the perceptual difficulties predicted by
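    A toy version of the penalty-scoring idea can make the scheduling trade-off concrete. The weights, timbre classes, and message timings below are invented for illustration; the dissertation's heuristics are considerably richer.

```python
import itertools

# Each message is (start_time, duration, timbre_class). Overlapping
# messages of the same class are assumed hardest to keep perceptually
# distinct, so same-class overlap is penalized more heavily.
def penalty(schedule):
    total = 0.0
    for (s1, d1, c1), (s2, d2, c2) in itertools.combinations(schedule, 2):
        overlap = max(0.0, min(s1 + d1, s2 + d2) - max(s1, s2))
        weight = 3.0 if c1 == c2 else 1.0  # same-timbre overlap costs more
        total += weight * overlap
    return total

overlapped = [(0.0, 1.0, "voice"), (0.2, 1.0, "voice"), (0.5, 1.0, "motif")]
staggered = [(0.0, 1.0, "voice"), (1.0, 1.0, "voice"), (0.5, 1.0, "motif")]

score_overlapped = penalty(overlapped)  # two voices collide heavily
score_staggered = penalty(staggered)    # second voice deferred
```

    A scheduler of this kind simply searches candidate schedules for the one with the lowest penalty, trading latency against perceptual distinctness.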

  5. Neural correlates of auditory-somatosensory interaction in speech perception

    OpenAIRE

    Ito, Takayuki; Gracco, Vincent; Ostry, David J.

    2015-01-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech p...

  6. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Chi-Wing, Ng; Bethany, Plakke; Amy, Poremba

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  7. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  8. Music perception: information flow within the human auditory cortices.

    Science.gov (United States)

    Angulo-Perkins, Arafat; Concha, Luis

    2014-01-01

    Information processing of all acoustic stimuli involves temporal lobe regions referred to as auditory cortices, which receive direct afferents from the auditory thalamus. However, the perception of music (as well as speech or spoken language) is a complex process that also involves secondary and association cortices that form a large functional network. Using different analytical techniques and stimulation paradigms, several studies have shown that certain areas are particularly sensitive to specific acoustic characteristics inherent to music (e.g., rhythm). This chapter reviews the functional anatomy of the auditory cortices and highlights specific experiments that suggest the existence of distinct cortical networks for the perception of music and speech. PMID:25358716

  9. Visual discrimination of delayed self-generated movement reveals the temporal limit of proprioceptive-visual intermodal integration.

    Science.gov (United States)

    Jaime, Mark; O'Driscoll, Kelly; Moore, Chris

    2016-07-01

    This study examined the intermodal integration of visual-proprioceptive feedback via a novel visual discrimination task involving delayed self-generated movement. Participants performed a goal-oriented task in which visual feedback was available only via delayed videos displayed on two monitors, each with a different delay duration. During task performance, the delay duration was varied for one of the videos in the pair relative to a standard delay, which was held constant. Participants were required to identify and use the video with the lesser delay to perform the task. Visual discrimination of the lesser-delayed video was examined under four conditions in which the standard delay was increased for each condition. A temporal limit for proprioceptive-visual intermodal integration of 3-5 s was revealed by subjects' inability to reliably discriminate video pairs. PMID:27208649

  10. Robust auditory localization using probabilistic inference and coherence-based weighting of interaural cues.

    Science.gov (United States)

    Kayser, Hendrik; Hohmann, Volker; Ewert, Stephan D; Kollmeier, Birger; Anemüller, Jörn

    2015-11-01

    Robust sound source localization is performed by the human auditory system even in challenging acoustic conditions and in previously unencountered, complex scenarios. Here a computational binaural localization model is proposed that possesses mechanisms for handling of corrupted or unreliable localization cues and generalization across different acoustic situations. Central to the model is the use of interaural coherence, measured as interaural vector strength (IVS), to dynamically weight the importance of observed interaural phase (IPD) and level (ILD) differences in frequency bands up to 1.4 kHz. This is accomplished through formulation of a probabilistic model in which the ILD and IPD distributions pertaining to a specific source location are dependent on observed interaural coherence. Bayesian computation of the direction-of-arrival probability map naturally leads to coherence-weighted integration of location cues across frequency and time. Results confirm the model's validity through statistical analyses of interaural parameter values. Simulated localization experiments show that even data points with low reliability (i.e., low IVS) can be exploited to enhance localization performance. A temporal integration length of at least 200 ms is required to gain a benefit; this is in accordance with previous psychoacoustic findings on temporal integration of spatial cues in the human auditory system. PMID:26627742
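    The coherence measure at the heart of the model, interaural vector strength, is the resultant length of the interaural phase differences across time frames: near 1 for a consistent (reliable) cue, near 0 for a corrupted one. The sketch below, with invented IPD values and a simple weighted pooling in place of the paper's full Bayesian model, illustrates the weighting idea.

```python
import numpy as np

def interaural_vector_strength(ipd_frames):
    # Resultant length of phase angles: 1.0 for identical IPDs,
    # near 0 for uniformly scattered ones.
    return float(np.abs(np.mean(np.exp(1j * np.asarray(ipd_frames)))))

rng = np.random.default_rng(3)
coherent_ipd = 0.4 + 0.05 * rng.standard_normal(50)  # stable cue near 0.4 rad
corrupt_ipd = rng.uniform(-np.pi, np.pi, 50)         # phase scrambled by noise

ivs_good = interaural_vector_strength(coherent_ipd)
ivs_bad = interaural_vector_strength(corrupt_ipd)

# Coherence-weighted pooling: the reliable frame set dominates the
# pooled phase estimate instead of being averaged in blindly.
ipds = np.array([coherent_ipd.mean(), corrupt_ipd.mean()])
weights = np.array([ivs_good, ivs_bad])
pooled_ipd = float(np.sum(weights * ipds) / weights.sum())
```

    Down-weighting rather than discarding the low-coherence frames mirrors the model's finding that even unreliable data points can still contribute to localization.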

  11. Action-related auditory ERP attenuation: Paradigms and hypotheses.

    Science.gov (United States)

    Horváth, János

    2015-11-11

    A number of studies have shown that the auditory N1 event-related potential (ERP) is attenuated when elicited by self-induced or self-generated sounds. Because N1 is a correlate of auditory feature- and event-detection, it was generally assumed that N1 attenuation reflects the cancellation of auditory re-afference, enabled by internal forward modeling of the predictable sensory consequences of a given action. Focusing on paradigms utilizing non-speech actions, the present review summarizes recent progress on action-related auditory attenuation. Following a critical analysis of the most widely used, contingent paradigm, two further hypotheses on the possible causes of action-related auditory ERP attenuation are presented. The attention hypothesis suggests that auditory ERP attenuation is brought about by a temporary division of attention between the action and the auditory stimulation. The pre-activation hypothesis suggests that the attenuation is caused by the activation of a sensory template during the initiation of the action, which interferes with the incoming stimulation. Although each hypothesis can account for a number of findings, none of them can accommodate the whole spectrum of results. It is suggested that a better understanding of auditory ERP attenuation phenomena could be achieved by systematic investigations of the types of actions, the degree of action-effect contingency, and the temporal characteristics of the buildup and deactivation of action-effect contingency representations. This article is part of a Special Issue entitled SI: Prediction and Attention. PMID:25843932

  12. Evidence for Integrated Visual Face and Body Representations in the Anterior Temporal Lobes.

    Science.gov (United States)

    Harry, Bronson B; Umla-Runge, Katja; Lawrence, Andrew D; Graham, Kim S; Downing, Paul E

    2016-08-01

    Research on visual face perception has revealed a region in the ventral anterior temporal lobes, often referred to as the anterior temporal face patch (ATFP), which responds strongly to images of faces. To date, the selectivity of the ATFP has been examined by contrasting responses to faces against a small selection of categories. Here, we assess the selectivity of the ATFP in humans with a broad range of visual control stimuli to provide a stronger test of face selectivity in this region. In Experiment 1, participants viewed images from 20 stimulus categories in an event-related fMRI design. Faces evoked more activity than all other 19 categories in the left ATFP. In the right ATFP, equally strong responses were observed for both faces and headless bodies. To pursue this unexpected finding, in Experiment 2, we used multivoxel pattern analysis to examine whether the strong response to face and body stimuli reflects a common coding of both classes or instead overlapping but distinct representations. On a voxel-by-voxel basis, face and whole-body responses were significantly positively correlated in the right ATFP, but face and body-part responses were not. This finding suggests that there is shared neural coding of faces and whole bodies in the right ATFP that does not extend to individual body parts. In contrast, the same approach revealed distinct face and body representations in the right fusiform gyrus. These results are indicative of an increasing convergence of distinct sources of person-related perceptual information proceeding from the posterior to the anterior temporal cortex. PMID:27054399

  13. Integrated single grating compressor for variable pulse front tilt in simultaneously spatially and temporally focused systems.

    Science.gov (United States)

    Block, Erica; Thomas, Jens; Durfee, Charles; Squier, Jeff

    2014-12-15

    A Ti:Al2O3 multipass chirped pulse amplification system is outfitted with a single-grating, simultaneous spatial and temporal focusing (SSTF) compressor platform. For the first time, this design makes it possible to easily vary the beam aspect ratio of an SSTF beam, and thus the degree of pulse-front tilt at focus, while maintaining a net zero-dispersion system. Accessible variation of pulse-front tilt gives full spatiotemporal control over the intensity distribution at the focus and could lead to a better understanding of effects such as nonreciprocal writing and SSTF-material interactions. PMID:25503029

  14. Integration of various data sources for transient groundwater modeling with spatio-temporally variable fluxes—Sardon study case, Spain

    Science.gov (United States)

    Lubczynski, Maciek W.; Gurwin, Jacek

    2005-05-01

    Spatio-temporal variability of recharge (R) and groundwater evapotranspiration (ETg) fluxes in the granite Sardon catchment in Spain (~80 km²) has been assessed based on the integration of various data sources and methods within a numerical MODFLOW groundwater model. The data sources and methods included: a remote sensing solution of the surface energy balance using satellite data, sap flow measurements, chloride mass balance, automated monitoring of climate, depth to groundwater table and river discharges, 1D reservoir modeling, GIS modeling, field cartography and aerial photo interpretation, slug and pumping tests, and resistivity, electromagnetic and magnetic resonance soundings. The presented case study provides not only a detailed evaluation of the complexity of spatio-temporally variable fluxes, but also a complete and generic methodology of modern data acquisition and data integration in transient groundwater modeling for spatio-temporal groundwater balancing. The calibrated numerical model showed spatially variable patterns of R and ETg fluxes despite a uniform rainfall pattern. The seasonal variability of fluxes indicated: (1) R in the range of 0.3-0.5 mm/d within the ~8 months of the wet season, with exceptional peaks as high as 0.9 mm/d in January and February and no recharge in July and August; (2) a year-round stable lateral groundwater outflow (Qg) in the range of 0.08-0.24 mm/d; (3) ETg = 0.64, 0.80, and 0.55 mm/d in the dry seasons of 1997, 1998, and 1999, respectively, while stands of Quercus ilex and Quercus pyrenaica indicated flux rates of 0.40 and 0.15 mm/d, respectively. The dry-season tree transpiration for the entire catchment was ~0.16 mm/d. The availability of dry-season transpiration measurements, considered as root groundwater uptake (Tg), allowed estimation of dry-season catchment groundwater evaporation (Eg) as 0.48, 0.64, and 0.39 mm/d for 1997, 1998 and 1999, respectively.
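
    The dry-season partitioning reported in this abstract follows a simple balance: groundwater evaporation is total groundwater evapotranspiration minus root uptake, Eg = ETg - Tg (all in mm/d). A minimal check of the reported numbers, using the catchment-wide Tg of ~0.16 mm/d:

```python
# Dry-season groundwater balance from the abstract: Eg = ETg - Tg (mm/d).
ETg = {1997: 0.64, 1998: 0.80, 1999: 0.55}  # total groundwater evapotranspiration
Tg = 0.16                                    # catchment-wide dry-season tree transpiration

Eg = {year: round(et - Tg, 2) for year, et in ETg.items()}
print(Eg)  # → {1997: 0.48, 1998: 0.64, 1999: 0.39}
```

    The result reproduces the Eg values of 0.48, 0.64 and 0.39 mm/d quoted for 1997-1999.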

  15. Parameters Affecting Temporal Resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    OpenAIRE

    Mor, I.; Vartsky, D.; Dangendorf, V.; Bar, D.; Feldman, G.; Goldberg, M B; Tittelmeier, K.; Bromberger, B.; Brandis, M.; Weierganz, M.

    2013-01-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy-variations of the total scattering cross-section in the En = 1-10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time of flight (ECTOF), it integrates the detector signal during a well-defined neutron Time of Flight window correspondi...

  16. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. Here we address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a tracking system with six infrared cameras. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand; to accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short target presentations but were not modified by auditory feedback of hand position. Long target presentations gave rise to higher accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in acoustic cues due to changes in head orientation in order to support online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  17. Maturational differences in thalamocortical white matter microstructure and auditory evoked response latencies in autism spectrum disorders

    OpenAIRE

    Roberts, Timothy P. L.; Lanza, Matthew R.; Dell, John; Qasmieh, Saba; Hines, Katherine; Blaskey, Lisa; Zarnow, Deborah M.; Levy, Susan E; Edgar, J. Christopher; Berman, Jeffrey I.

    2013-01-01

    White matter diffusion anisotropy in the acoustic radiations was characterized as a function of development in autistic and typically developing children. Auditory-evoked neuromagnetic fields were also recorded from the same individuals and the latency of the left and right middle latency superior temporal gyrus auditory ~50ms response (M50)1 was measured. Group differences in structural and functional auditory measures were examined, as were group differences in associations between white ma...

  18. The Essential Complexity of Auditory Receptive Fields.

    Science.gov (United States)

    Thorson, Ivar L; Liénard, Jean; David, Stephen V

    2015-12-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of >1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture led to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short-term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
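
    The first constraint, factorizing the STRF matrix into a small number of spectral and temporal filters, can be sketched with an SVD-based rank-1 approximation. The filter shapes, dimensions, and noise level below are illustrative assumptions, not the recorded data; the parameter-count comparison mirrors the abstract's point that a factorized STRF needs roughly a tenth of the FIR filter's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_lag = 18, 15  # hypothetical spectral channels x time lags

# Build a rank-1 "ground truth" STRF: one spectral filter times one temporal filter.
spectral = np.exp(-0.5 * ((np.arange(n_freq) - 9) / 2.0) ** 2)
temporal = np.sin(np.arange(n_lag) / 2.0) * np.exp(-np.arange(n_lag) / 5.0)
strf = np.outer(spectral, temporal) + 0.01 * rng.normal(size=(n_freq, n_lag))

# Factorize with SVD and keep the top component (cf. constraint 1 in the abstract).
U, s, Vt = np.linalg.svd(strf, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])

full_params = n_freq * n_lag        # 270 free parameters for the full FIR STRF
rank1_params = n_freq + n_lag + 1   # 34 for the factorized form (two filters + gain)
explained = 1 - np.sum((strf - rank1) ** 2) / np.sum(strf ** 2)
print(full_params, rank1_params, round(float(explained), 3))  # variance explained near 1
```
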

  19. Overriding auditory attentional capture

    OpenAIRE

    Dalton, Polly; Lavie, Nilli

    2007-01-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when ...

  20. Effect of auditory integration training on cognitive function in the elderly with mild cognitive impairment

    Institute of Scientific and Technical Information of China (English)

    毛晓红; 魏秀红

    2012-01-01

    Objective: To evaluate the effect of auditory integration training on the cognitive function of elderly people with mild cognitive impairment (MCI). Methods: Sixty elderly people aged 60 to 75 years with MCI were randomly divided into a training group (n = 30) and a control group (n = 30). The training group received supervised auditory integration training from 9:00 to 9:30 a.m., 6 days per week, 0.5 h per day, for six months; the control group received no training intervention. Cognitive function was assessed with the Basic Cognitive Ability Test before and after the six months, with within-group and between-group comparisons. Results: After six months, the training group outperformed the control group on rapid digit copying, rapid Chinese character comparison, and recall of mental arithmetic answers (P < 0.05), and also outperformed its own pre-intervention scores (P < 0.05). Conclusions: Auditory integration training can improve the cognitive function of elderly people with MCI.

  1. Integrating Real-time and Manual Monitored Soil Moisture Data to Predict Hillslope Soil Moisture Variations with High Temporal Resolutions

    Science.gov (United States)

    Zhu, Qing; Lv, Ligang; Zhou, Zhiwen; Liao, Kaihua

    2016-04-01

    Spatio-temporal variability of soil moisture remains a challenge to understand. A trade-off exists between spatial coverage and temporal resolution when using manual versus real-time soil moisture monitoring methods, which restricts comprehensive, intensive examination of soil moisture dynamics. In this study, we aimed to integrate manually and real-time monitored soil moisture to depict hillslope soil moisture dynamics with good spatial coverage and temporal resolution. Linear (stepwise multiple linear regression, SMLR) and non-linear (support vector machine, SVM) models were used to predict soil moisture at 38 manual sites (sampled 1-2 times per month) from soil moisture automatically collected at three real-time monitoring sites (sampled every 5 min). By comparing the accuracies of SMLR and SVM at each manual site, the optimal prediction model for that site was determined. Results show that soil moisture at these 38 manual sites can be reliably predicted (low root mean square errors). Wetness index, profile curvature, and the relative difference of soil moisture and its standard deviation influenced the selection of the prediction model, since they relate to the dynamics of soil water distribution and movement. Using this approach, hillslope soil moisture spatial distributions at un-sampled times and dates were predicted after a typical rainfall event, successfully recovering missing information on hillslope soil moisture dynamics. This can benefit the identification of hot spots and moments of soil water movement, as well as the design of proper soil moisture monitoring plans at the field scale.
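
    A minimal sketch of the linear (SMLR-like) branch of this approach, assuming entirely synthetic data: predict one manual site's soil moisture from the three real-time stations by least squares on a calibration subset, then evaluate at held-out times. The SVM branch, the stepwise variable selection, and the terrain covariates are omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
n_times = 120  # hypothetical sampling instants

# Soil moisture (m3/m3) at three real-time stations, high temporal resolution...
rt = 0.25 + 0.08 * rng.random((n_times, 3))

# ...and at one manual site, simulated as a noisy linear mix of the stations.
true_w = np.array([0.5, 0.3, 0.2])  # invented weights for the simulation
manual = rt @ true_w + 0.01 + 0.005 * rng.normal(size=n_times)

# Fit weights + intercept by least squares on a calibration half...
X = np.column_stack([rt, np.ones(n_times)])
coef, *_ = np.linalg.lstsq(X[:60], manual[:60], rcond=None)

# ...then predict the manual site at the remaining (un-sampled) times.
pred = X[60:] @ coef
rmse = float(np.sqrt(np.mean((pred - manual[60:]) ** 2)))
print(round(rmse, 4))
```
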

  2. [Central auditory prosthesis].

    Science.gov (United States)

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of a bilateral acoustic neuroma. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open set speech perception. In the search for alternative procedures our research group in collaboration with Cochlear Ltd. (Australia) developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has the potential as a new target for an auditory prosthesis as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results as well as the background concepts of ABI and AMI. PMID:19517084

  3. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    OpenAIRE

    AndrewLeeBowers; DavidJenson; MeganCuellar

    2014-01-01

    Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20Hz) and alpha (~10Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllables pai...

  4. Approximate information capacity of the perfect integrate-and-fire neuron using the temporal code

    Czech Academy of Sciences Publication Activity Database

    Košťál, Lubomír

    2012-01-01

    Roč. 1434, JAN 24 (2012), s. 136-141. ISSN 0006-8993. [International Workshop on Neural Coding. Limassol, 29.10.2010-03.11.2010] R&D Projects: GA MŠk(CZ) LC554; GA ČR(CZ) GAP103/11/0282 Institutional research plan: CEZ:AV0Z50110509 Keywords: integrate-and-fire neuron * information capacity Subject RIV: FH - Neurology Impact factor: 2.879, year: 2012

  5. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds, and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included electrophysiological and behavioral auditory processing measures and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests, and higher hearing aid benefit in noisy situations (p < 0.05). No changes were noted for the control group (p > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds, and greater benefits in reverberant and noisy environments.

  6. Integrating temporal and spatial scales: Human structural network motifs across age and region-of-interest size

    CERN Document Server

    Echtermeyer, Christoph; Rotarska-Jagiela, Anna; Mohr, Harald; Uhlhaas, Peter J; Kaiser, Marcus

    2011-01-01

    Human brain networks can be characterized at different temporal or spatial scales given by the age of the subject or the spatial resolution of the neuroimaging method. Integration of data across scales can only be successful if the combined networks show a similar architecture. One way to compare networks is to look at spatial features, based on fibre length, and topological features of individual nodes where outlier nodes form single node motifs whose frequency yields a fingerprint of the network. Here, we observe how characteristic single node motifs change over age (12-23 years) and network size (414, 813, and 1615 nodes) for diffusion tensor imaging (DTI) structural connectivity in healthy human subjects. First, we find the number and diversity of motifs in a network to be strongly correlated. Second, comparing different scales, the number and diversity of motifs varied across the temporal (subject age) and spatial (network resolution) scale: certain motifs might only occur at one spatial scale or for a c...

  7. Luminance and opponent-color contributions to visual detection and adaptation and to temporal and spatial integration.

    Science.gov (United States)

    King-Smith, P E; Carden, D

    1976-07-01

    We show how the processes of visual detection and of temporal and spatial summation may be analyzed in terms of parallel luminance (achromatic) and opponent-color systems; a test flash is detected if it exceeds the threshold of either system. The spectral sensitivity of the luminance system may be determined by a flicker method, and has a single broad peak near 555 nm; the spectral sensitivity of the opponent-color system corresponds to the color recognition threshold, and has three peaks at about 440, 530, and 600 nm (on a white background). The temporal and spatial integration of the opponent-color system are generally greater than for the luminance system; further, a white background selectively depresses the sensitivity of the luminance system relative to the opponent-color system. Thus relatively large (1 degree) and long (200 msec) spectral test flashes on a white background are detected by the opponent-color system except near 570 nm; the contribution of the luminance system becomes more prominent if the size or duration of the test flash is reduced, or if the white background is extinguished. The present analysis is discussed in relation to Stiles' model of independent π mechanisms. PMID:978286
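
    The detection rule described here, that a flash is seen if it exceeds the threshold of either system, makes the overall detection sensitivity the upper envelope of the two systems' sensitivity curves. A toy sketch with invented log-sensitivity curves (one broad luminance peak near 555 nm, three opponent-color peaks near 440, 530 and 600 nm; none of these curves are the paper's measured data):

```python
import numpy as np

wavelengths = np.arange(420, 661, 10)  # nm, illustrative grid

# Toy log-sensitivity curves (shapes and scales invented for illustration).
lum = -((wavelengths - 555.0) / 80.0) ** 2
opp = np.maximum.reduce([
    -((wavelengths - 440.0) / 25.0) ** 2 - 0.1,
    -((wavelengths - 530.0) / 25.0) ** 2 - 0.1,
    -((wavelengths - 600.0) / 25.0) ** 2 - 0.1,
])

# A flash is detected if it exceeds EITHER system's threshold, so overall
# sensitivity is the pointwise upper envelope of the two curves.
detection = np.maximum(lum, opp)

# Near 570 nm (between the 530 and 600 nm opponent peaks) the luminance
# system dominates, as the abstract describes.
i570 = int(np.argmin(np.abs(wavelengths - 570)))
print("luminance dominates at 570 nm:", lum[i570] > opp[i570])  # → True
```
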

  8. An integrated approach for potato crop intensification using temporal remote sensing data

    Science.gov (United States)

    Panigrahy, S.; Chakraborty, M.

    Temporal remote sensing data, along with soil, physiographic, rainfall and temperature information, were modelled to derive a potato-growing environment index. This was used to identify the agricultural areas suitable for growing potato crops in the Bardhaman district, West Bengal, India. The crop rotation information derived from multi-date Indian Remote Sensing Satellite (IRS) LISS-I data was used to characterize the present utilization pattern of the identified area. The result showed that around 99,000 ha of agricultural area was suitable for growing potato crops. At present, only 36-38% of this area is under potato cultivation. Around 15% of the area goes to summer rice, and is thus not available for potato crop cultivation. The rest of the identified area, amounting to around 45,000 ha (46-48%), lies fallow after the harvest of kharif rice. This area thus, if given priority under the potato crop intensification programme initiated by the Government of West Bengal, has a high probability of success.

  9. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    Science.gov (United States)

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  10. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  11. Temporal Cortex Activation to Audiovisual Speech in Normal-Hearing and Cochlear Implant Users Measured with Functional Near-Infrared Spectroscopy

    Science.gov (United States)

    van de Rijt, Luuk P. H.; van Opstal, A. John; Mylanus, Emmanuel A. M.; Straatman, Louise V.; Hu, Hai Yin; Snik, Ad F. M.; van Wanrooij, Marc M.

    2016-01-01

    Background: Speech understanding may rely not only on auditory, but also on visual information. Non-invasive functional neuroimaging techniques can expose the neural processes underlying the integration of multisensory processes required for speech understanding in humans. Nevertheless, noise from functional MRI (fMRI) limits its usefulness in auditory experiments, and electromagnetic artifacts caused by electronic implants worn by subjects can severely distort the scans (EEG, fMRI). Therefore, we assessed audiovisual activation of temporal cortex with a silent, optical neuroimaging technique: functional near-infrared spectroscopy (fNIRS). Methods: We studied temporal cortical activation, as represented by concentration changes of oxy- and deoxy-hemoglobin, in four easy-to-apply fNIRS optical channels of 33 normal-hearing adult subjects and five post-lingually deaf cochlear implant (CI) users in response to supra-threshold unisensory auditory and visual, as well as congruent auditory-visual speech stimuli. Results: Activation effects were not visible from single fNIRS channels. However, by discounting physiological noise through reference channel subtraction (RCS), auditory, visual and audiovisual (AV) speech stimuli evoked concentration changes for all sensory modalities in both cohorts (p < 0.001). Auditory stimulation evoked larger concentration changes than visual stimuli (p < 0.001). A saturation effect was observed for the AV condition. Conclusions: Physiological, systemic noise can be removed from fNIRS signals by RCS. The observed multisensory enhancement of an auditory cortical channel can be plausibly described by a simple addition of the auditory and visual signals with saturation. PMID:26903848
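
    Reference channel subtraction (RCS), as used here, can be sketched as regressing the signal channel onto a reference channel dominated by systemic physiology and keeping the residual. The time courses below are simulated stand-ins (a stylized stimulus response plus a shared low-frequency systemic oscillation), not fNIRS recordings:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 600)  # 60 s at 10 Hz, illustrative

systemic = 0.4 * np.sin(2 * np.pi * 0.1 * t)  # shared physiological noise
hrf_like = np.exp(-(t - 20) ** 2 / 30)        # stylized stimulus response

long_ch = hrf_like + systemic + 0.05 * rng.normal(size=t.size)  # cortex + systemic
short_ch = 0.9 * systemic + 0.05 * rng.normal(size=t.size)      # mostly systemic

# RCS: regress the signal channel onto the reference channel and keep the residual.
beta = np.dot(long_ch, short_ch) / np.dot(short_ch, short_ch)
cleaned = long_ch - beta * short_ch

corr_before = float(np.corrcoef(long_ch, systemic)[0, 1])
corr_after = float(np.corrcoef(cleaned, systemic)[0, 1])
print(round(corr_before, 2), round(corr_after, 2))  # systemic coupling drops after RCS
```
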

  12. An MEG Study of Temporal Characteristics of Semantic Integration in Japanese Noun Phrases

    Science.gov (United States)

    Kiguchi, Hirohisa; Asakura, Nobuhiko

    Many studies of on-line comprehension of semantic violations have shown that the human sentence processor rapidly constructs a higher-order semantic interpretation of the sentence. What remains unclear, however, is the amount of time required to detect semantic anomalies when concatenating two words to form a phrase under very rapid stimulus presentation. We aimed to examine the time course of semantic integration in concatenating two words in phrase structure building, using magnetoencephalography (MEG). In the MEG experiment, subjects decided whether two words (a classifier and its corresponding noun), each presented for 66 ms, formed a semantically correct noun phrase. Half of the stimuli were matched pairs of classifiers and nouns; the other half were mismatched pairs. In the analysis of the MEG data, three primary peaks were found at approximately 25 ms (M1), 170 ms (M2) and 250 ms (M3) after the presentation of the target words. Of these, only the M3 latencies were significantly affected by the stimulus conditions. Thus, the present results indicate that semantic integration in concatenating two words starts from approximately 250 ms.

  13. Stimulus-invariant processing and spectrotemporal reverse correlation in primary auditory cortex.

    Science.gov (United States)

    Klein, David J; Simon, Jonathan Z; Depireux, Didier A; Shamma, Shihab A

    2006-04-01

    The spectrotemporal receptive field (STRF) provides a versatile and integrated, spectral and temporal, functional characterization of single cells in primary auditory cortex (AI). In this paper, we explore the origin of, and relationship between, different ways of measuring and analyzing an STRF. We demonstrate that STRFs measured using a spectrotemporally diverse array of broadband stimuli, such as dynamic ripples, spectrotemporally white noise, and temporally orthogonal ripple combinations (TORCs), are very similar, confirming earlier findings that the STRF is a robust linear descriptor of the cell. We also present a new deterministic analysis framework that employs the Fourier series to describe the spectrotemporal modulations contained in the stimuli and responses. Additional insights into the STRF measurements, including the nature and interpretation of measurement errors, are presented using the Fourier transform, coupled to singular-value decomposition (SVD), and variability analyses including bootstrap. The results promote the utility of the STRF as a core functional descriptor of neurons in AI. PMID:16518572

  14. Multimodal integration of time.

    Science.gov (United States)

    Bausenhart, Karin M; de la Rosa, Maria Dolores; Ulrich, Rolf

    2014-01-01

    Recent studies suggest that the accuracy of duration discrimination for visually presented intervals is strongly impaired by concurrently presented auditory intervals of different duration, but not vice versa. Because these studies rely mostly on accuracy measures, it remains unclear whether this impairment results from changes in perceived duration or rather from a decrease in perceptual sensitivity. We therefore assessed complete psychometric functions in a duration discrimination task to disentangle effects on perceived duration and sensitivity. Specifically, participants compared two empty intervals marked by either visual or auditory pulses. These pulses were either presented unimodally, or accompanied by task-irrelevant pulses in the respective other modality, which defined conflicting intervals of identical, shorter, or longer duration. Participants were instructed to base their temporal judgments solely on the task-relevant modality. Despite this instruction, perceived duration was clearly biased toward the duration of the intervals marked in the task-irrelevant modality. This was not only found for the discrimination of visual intervals, but also, to a lesser extent, for the discrimination of auditory intervals. Discrimination sensitivity, however, was similar between all multimodal conditions, and only improved compared to the presentation of unimodal visual intervals. In a second experiment, evidence for multisensory integration was even found when the task-irrelevant modality did not contain any duration information, thus excluding noncompliant attention allocation as a basis of our results. Our results thus suggest that audiovisual integration of temporally discrepant signals does not impair discrimination sensitivity but rather alters perceived duration, presumably by means of a temporal ventriloquism effect. PMID:24351985
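
    The study's central distinction, a shift in perceived duration (the point of subjective equality, PSE) without a change in discrimination sensitivity (the psychometric slope), can be illustrated with an idealized psychometric function. The durations, slope, and 40 ms conflict-induced shift below are invented for illustration:

```python
import numpy as np

# Comparison durations (ms) against a 500 ms standard, and idealized
# proportions of "comparison longer" responses in two conditions.
comparison = np.array([350, 400, 450, 500, 550, 600, 650])

def psychometric(x, pse, slope=0.02):
    """Logistic psychometric function: P('comparison judged longer')."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Unimodal: PSE at the true standard. Conflict: the task-irrelevant auditory
# interval pulls perceived duration upward, shifting the PSE rightward while
# leaving the slope (sensitivity) unchanged.
p_unimodal = psychometric(comparison, pse=500.0)
p_conflict = psychometric(comparison, pse=540.0)

def estimate_pse(x, p):
    """Recover the 50% point by linear interpolation."""
    return float(np.interp(0.5, p, x))

shift = estimate_pse(comparison, p_conflict) - estimate_pse(comparison, p_unimodal)
print(round(shift, 1))  # → 39.6
```
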

  15. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  16. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures, temporally homogeneous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader "peripheral" filters or reduced compression...

  17. Electrophysiologic Assessment of Auditory Training Benefits in Older Adults.

    Science.gov (United States)

    Anderson, Samira; Jenkins, Kimberly

    2015-11-01

    Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy. PMID:27587912

  18. Integrating environmental equity, energy and sustainability: A spatial-temporal study of electric power generation

    Science.gov (United States)

    Touche, George Earl

    The theoretical scope of this dissertation encompasses the ecological factors of equity and energy. Literature important to environmental justice and sustainability is reviewed, and a general integration of global concepts is delineated. The conceptual framework includes ecological integrity, quality human development, intra- and inter-generational equity, and risk originating from human economic activity and modern energy production. The empirical focus of this study is environmental equity and electric power generation within the United States. Several designs are employed, using paired t-tests, independent t-tests, zero-order correlation coefficients, and regression coefficients to test seven sets of hypotheses. Examinations are conducted at the census tract level within Texas and at the state level across the United States. At the community level within Texas, communities that host coal or natural gas utility power plants and corresponding comparison communities that do not host such power plants are tested for compositional differences. Comparisons are made both before and after the power plants began operating for purposes of assessing outcomes of the siting process and impacts of the power plants. Relationships between the compositions of the hosting communities and the risks and benefits originating from the observed power plants are also examined. At the statewide level across the United States, relationships between statewide composition variables and risks and benefits originating from statewide electric power generation are examined. Findings indicate the existence of some limited environmental inequities, but they do not indicate disparities that confirm the general thesis of environmental racism put forth by environmental justice advocates. 
Although environmental justice strategies that would utilize Title VI of the 1964 Civil Rights Act and the disparate impact standard do not appear to be applicable, some findings suggest potential
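    The community-comparison design above rests on standard paired tests of matched host and non-host communities. As a minimal sketch (the values below are hypothetical illustrations, not the dissertation's data), a paired t statistic can be computed with the Python standard library:

```python
import math
import statistics

def paired_t(xs, ys):
    """Paired t statistic and degrees of freedom for matched samples."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    # t = mean(d) / (sd(d) / sqrt(n)), with sample (n-1) standard deviation
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical composition values (e.g., % of a demographic group) for six
# host tracts and their matched comparison tracts.
host = [22.1, 18.4, 30.2, 25.7, 19.9, 27.3]
comp = [20.5, 17.9, 28.8, 24.1, 20.3, 25.6]
t, df = paired_t(host, comp)
print(t, df)
```

The resulting t would then be compared against a t distribution with df degrees of freedom; a significant difference would suggest compositional disparity between host and comparison communities.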

  19. Intrahemispheric cortico-cortical connections of the human auditory cortex.

    Science.gov (United States)

    Cammoun, Leila; Thiran, Jean Philippe; Griffa, Alessandra; Meuli, Reto; Hagmann, Patric; Clarke, Stephanie

    2015-11-01

    The human auditory cortex comprises the supratemporal plane and large parts of the temporal and parietal convexities. We have investigated the relevant intrahemispheric cortico-cortical connections using in vivo DSI tractography combined with landmark-based registration, automatic cortical parcellation and whole-brain structural connection matrices in 20 right-handed male subjects. On the supratemporal plane, the pattern of connectivity was related to the architectonically defined early-stage auditory areas. It revealed a three-tier architecture characterized by a cascade of connections from the primary auditory cortex to six adjacent non-primary areas and from there to the superior temporal gyrus. Graph theory-driven analysis confirmed the cascade-like connectivity pattern and demonstrated a strong degree of segregation and hierarchy within early-stage auditory areas. Putative higher-order areas on the temporal and parietal convexities had more widely spread local connectivity and long-range connections with the prefrontal cortex; analysis of optimal community structure revealed five distinct modules in each hemisphere. The pattern of temporo-parieto-frontal connectivity was partially asymmetrical. In conclusion, the human early-stage auditory cortical connectivity, as revealed by in vivo DSI tractography, has strong similarities with that of non-human primates. The modular architecture and hemispheric asymmetry in higher-order regions is compatible with segregated processing streams and lateralization of cognitive functions. PMID:25173473
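    The "graph theory-driven analysis" and "optimal community structure" mentioned above are typically based on Newman modularity. The toy graph and partitions below are purely illustrative, not the study's connection matrices:

```python
# Newman modularity Q for a given partition of an undirected graph:
# Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(c_i, c_j)
def modularity(adj, partition):
    n = len(adj)
    degree = [sum(row) for row in adj]
    two_m = sum(degree)  # 2m = sum of all degrees
    q = 0.0
    for i in range(n):
        for j in range(n):
            if partition[i] == partition[j]:
                q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Two 3-node cliques joined by one edge -> two obvious modules.
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
good = [0, 0, 0, 1, 1, 1]  # partition matching the two cliques
bad = [0, 1, 0, 1, 0, 1]   # arbitrary mixed partition
print(modularity(adj, good))  # higher Q for the clique-respecting partition
print(modularity(adj, bad))
```

Community-detection algorithms search over partitions to maximize Q; a strongly modular partition of a connection matrix is what underlies claims of "five distinct modules in each hemisphere".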

  20. The Basal Forebrain and Motor Cortex Provide Convergent yet Distinct Movement-Related Inputs to the Auditory Cortex.

    Science.gov (United States)

    Nelson, Anders; Mooney, Richard

    2016-05-01

    Cholinergic inputs to the auditory cortex from the basal forebrain (BF) are important to auditory processing and plasticity, but little is known about the organization of these synapses onto different auditory cortical neuron types, how they influence auditory responsiveness, and their activity patterns during various behaviors. Using intersectional tracing, optogenetic circuit mapping, and in vivo calcium imaging, we found that cholinergic axons arising from the caudal BF target major excitatory and inhibitory auditory cortical cell types, rapidly modulate auditory cortical tuning, and display fast movement-related activity. Furthermore, the BF and the motor cortex-another source of movement-related activity-provide convergent input onto some of the same auditory cortical neurons. Cholinergic and motor cortical afferents to the auditory cortex display distinct activity patterns and presynaptic partners, indicating that the auditory cortex integrates bottom-up cholinergic signals related to ongoing movements and arousal with top-down information concerning impending movements and motor planning. PMID:27112494

  1. Concentric scheme of monkey auditory cortex

    Science.gov (United States)

    Kosaki, Hiroko; Saunders, Richard C.; Mishkin, Mortimer

    2003-04-01

    The cytoarchitecture of the rhesus monkey's auditory cortex was examined using immunocytochemical staining with parvalbumin, calbindin-D28K, and SMI32, as well as staining for cytochrome oxidase (CO). The results suggest that Kaas and Hackett's scheme of the auditory cortices can be extended to include five concentric rings surrounding an inner core. The inner core, containing areas A1 and R, is the most densely stained with parvalbumin and CO and can be separated on the basis of laminar patterns of SMI32 staining into lateral and medial subdivisions. From the inner core to the fifth (outermost) ring, parvalbumin staining gradually decreases and calbindin staining gradually increases. The first ring corresponds to Kaas and Hackett's auditory belt, and the second, to their parabelt. SMI32 staining revealed a clear border between these two. Rings 2 through 5 extend laterally into the dorsal bank of the superior temporal sulcus. The results also suggest that the rostral tip of the outermost ring adjoins the rostroventral part of the insula (area Pro) and the temporal pole, while the caudal tip adjoins the ventral part of area 7a.

  2. Brain responses and looking behaviour during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.

    Directory of Open Access Journals (Sweden)

    Elena V. Kushnerenko

    2013-07-01

    Full Text Available The use of visual cues during the processing of audiovisual speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6-9 months to 14-16 months of age. We used eye-tracking to examine whether individual differences in visual attention during audiovisual processing of speech in 6 to 9 month old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6-9 month old infants also participated in an event-related potential (ERP) audiovisual task within the same experimental session. Language development was then followed up at the age of 14-16 months, using two measures of language development: the Preschool Language Scale (PLS) and the Oxford Communicative Development Inventory (CDI). The results show that those infants who were less efficient in auditory speech processing at the age of 6-9 months had lower receptive language scores at 14-16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audio-visually incongruent stimuli at 6-9 months were both significantly associated with language development at 14-16 months. These findings add to the understanding of individual differences in neural signatures of audiovisual processing and associated looking behaviour in infants.

  3. Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Potton, Anita; Birtles, Deidre; Frostick, Caroline; Moore, Derek G.

    2013-01-01

    The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties and difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech in 6–9 month old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9 month old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed-up at the age of 14–16 months, using two measures of language development, the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants. PMID:23882240

  4. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Full Text Available Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users’ speech recognition by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  5. Assessing temporal uncertainties in integrated groundwater management: an opportunity for change?

    Science.gov (United States)

    Anglade, J. A.; Billen, G.; Garnier, J.

    2013-12-01

    Since the early 1990s, high nitrate concentrations (occasionally exceeding the European drinking water standard of 50 mg NO3-/l) have been recorded in the borewells supplying the water requirements of Auxerre's 60,000 inhabitants. The water catchment area (86 km2) lies in a rural area dedicated to field crop production in intensive cereal farming systems based on massive inputs of synthetic fertilizers. In 1998, a co-management committee comprising Auxerre City, rural municipalities located in the water catchment area, consumers, and farmers was created as a forward-looking associative structure to achieve integrated, adaptive, and sustainable management of the resource. In 2002, 18 years after the first signs of water quality degradation, multiparty negotiation led to a cooperative agreement: a contribution, funded by a surcharge on consumers' water bills, to assist farmers toward new practices (optimized application of fertilizers, catch crops, and buffer strips). The management strategy, initially integrated and operating on a voluntary basis, did not rapidly deliver on its promises (there was no significant decrease in nitrate concentration). It evolved into a combination of short-term palliative solutions and contractual and regulatory instruments with higher requirements. The establishment of a regulatory framework caused major tensions between stakeholders, bringing about discouragement and a lack of understanding as to the absence of results on water quality after 20 years of joint action. At this point urban-rural solidarity was in danger of being undermined, so the time issue, i.e. the delay between changes in agricultural pressure and visible effects on water quality, was scientifically addressed and communicated to all the parties involved. 
First, water age dating analysis through CFC and SF6 (anthropic gas) coupled with a statistical long term analysis of agricultural evolutions revealed a residence time in the Sequanian limestones

  6. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Hannah L. Golden

    2015-01-01

    Full Text Available Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology.

  7. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  8. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    Directory of Open Access Journals (Sweden)

    Andrew Lee Bowers

    2014-07-01

    Full Text Available Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of µ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. 17 and 15 out of 20 participants produced left and right µ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < .05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that µ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. µ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while µ-alpha ERD may index re-afferent sensory feedback during speech rehearsal and production.

  9. Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data.

    Science.gov (United States)

    Jenson, David; Bowers, Andrew L; Harkrider, Ashley W; Thornton, David; Cuellar, Megan; Saltuklaroglu, Tim

    2014-01-01

    Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. 17 and 15 out of 20 participants produced left and right μ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production. PMID:25071633
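    The ERSP measure described above — band power in sliding windows, expressed in dB relative to a pre-stimulus baseline — can be sketched in a few lines. This is a minimal illustration on a synthetic "alpha" signal with assumed parameters (sampling rate, window sizes), not the study's analysis pipeline:

```python
import cmath
import math

FS = 250  # assumed sampling rate in Hz

def band_power(segment, freq, fs=FS):
    """Power of one DFT component at `freq` Hz over a segment."""
    n = len(segment)
    coef = sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
               for k, x in enumerate(segment))
    return abs(coef) ** 2 / n

def ersp_db(signal, freq, win=50, step=25, baseline_wins=2, fs=FS):
    """Sliding-window band power in dB relative to the first few windows."""
    powers = [band_power(signal[s:s + win], freq, fs)
              for s in range(0, len(signal) - win + 1, step)]
    base = sum(powers[:baseline_wins]) / baseline_wins
    return [10 * math.log10(p / base) for p in powers]

# Toy trial: a 10 Hz oscillation whose amplitude halves mid-trial,
# mimicking event-related desynchronization (ERD).
trial = [math.sin(2 * math.pi * 10 * t / FS) * (1.0 if t < 250 else 0.5)
         for t in range(500)]
curve = ersp_db(trial, 10)
print(curve[0], curve[-1])  # late windows drop to negative dB (ERD-like)
```

Halving the amplitude quarters the power, so the late windows sit near 10*log10(0.25) ≈ -6 dB; in real ERSP analyses the same normalization is applied per frequency and averaged across trials.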

  10. Overriding auditory attentional capture.

    Science.gov (United States)

    Dalton, Polly; Lavie, Nilli

    2007-02-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects. PMID:17557587

  11. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 diverse stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  12. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...... of sound as an active component in shaping urban environments. As urban conditions spread globally, new scales, shapes and forms of communities appear and call for new distinctions and models in the study and representation of sonic environments. Particularly so, since urban environments......

  13. Simulating electrical modulation detection thresholds using a biophysical model of the auditory nerve.

    Science.gov (United States)

    O'Brien, Gabrielle E; Imennov, Nikita S; Rubinstein, Jay T

    2016-05-01

    Modulation detection thresholds (MDTs) assess listeners' sensitivity to changes in the temporal envelope of a signal and have been shown to strongly correlate with speech perception in cochlear implant users. MDTs are simulated with a stochastic model of a population of auditory nerve fibers that has been verified to accurately simulate a number of physiologically important temporal response properties. The procedure to estimate detection thresholds has previously been applied to stimulus discrimination tasks. The population model simulates the MDT-stimulus intensity relationship measured in cochlear implant users. The model also recreates the shape of the modulation transfer function and the relationship between MDTs and carrier rate. Discrimination based on fluctuations in synchronous firing activity predicts better performance at low carrier rates, but quantitative measures of modulation coding predict better neural representation of high carrier rate stimuli. Manipulating the number of fibers and a temporal integration parameter, the width of a sliding temporal integration window, varies properties of the MDTs, such as cutoff frequency and peak threshold. These results demonstrate the importance of using a multi-diameter fiber population in modeling the MDTs and demonstrate a wider applicability of this model to simulating behavioral performance in cochlear implant listeners. PMID:27250141
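    The role of the "sliding temporal integration window" described above can be illustrated with a toy computation: averaging with a boxcar window of width T attenuates envelope modulation at frequency f by roughly |sinc(πfT)|, so widening the window lowers the modulation-transfer cutoff. The parameters below are illustrative values, not those of the biophysical model:

```python
import math

def windowed_modulation_depth(mod_freq_hz, win_ms, dt_ms=0.1, dur_ms=1000.0):
    """Residual modulation depth after boxcar-averaging a 100%-modulated rate."""
    n = int(dur_ms / dt_ms)
    # Fully modulated rate envelope: 1 + sin(2*pi*f*t)
    rate = [1.0 + math.sin(2 * math.pi * mod_freq_hz * (k * dt_ms) / 1000.0)
            for k in range(n)]
    w = int(win_ms / dt_ms)  # integration window length in samples
    smoothed = [sum(rate[k:k + w]) / w for k in range(0, n - w)]
    # Depth of the surviving modulation (input depth was 1.0)
    return (max(smoothed) - min(smoothed)) / 2.0

for win in (2.0, 8.0, 32.0):  # window widths in ms
    print(win, windowed_modulation_depth(100.0, win))  # depth shrinks as win grows
```

In the same spirit, shrinking the window in a full model raises the modulation-transfer cutoff frequency, which is one way such a parameter can reshape simulated MDT curves.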

  14. Synchronization with audio and visual stimuli: Exploring multisensory integration and the role of spatio-temporal information

    OpenAIRE

    Armstrong, Alan

    2014-01-01

    Information lies at the very heart of the interaction between an individual and their environment, which has led many researchers to argue that the coupling constraining rhythmic coordination is informational. In an attempt to address this informational basis for perception-action, this thesis explored the specific information from a given environmental stimulus that is used to control our actions. Namely, participants in the three studies synchronized wrist-pendulum movements with auditory an...

  15. Auditory Perception, Phonological Processing, and Reading Ability/Disability.

    Science.gov (United States)

    Watson, Betty U.; Miller, Theodore K.

    1993-01-01

    This study of 94 college undergraduates, including 24 with a reading disability, found that speech perception was strongly related to 3 of 4 phonological variables, including short-term and long-term auditory memory and phoneme segmentation, which were in turn strongly related to reading. Nonverbal temporal processing was not related to any…

  16. Multimodal Lexical Processing in Auditory Cortex Is Literacy Skill Dependent

    OpenAIRE

    McNorgan, Chris; Awati, Neha; Desroches, Amy S.; Booth, James R.

    2013-01-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others d...

  17. Measuring the dynamics of neural responses in primary auditory cortex

    OpenAIRE

    Depireux, Didier A; Simon, Jonathan Z.; Shamma, Shihab A.

    1998-01-01

    We review recent developments in the measurement of the dynamics of the response properties of auditory cortical neurons to broadband sounds, which is closely related to the perception of timbre. The emphasis is on a method that characterizes the spectro-temporal properties of single neurons to dynamic, broadband sounds, akin to the drifting gratings used in vision. The method treats the spectral and temporal aspects of the response on an equal footing.

  18. Auditory Perception of Self-Similarity in Water Sounds

    OpenAIRE

    Geffen, Maria N.; Gervain, Judit; Werker, Janet F.; Magnasco, Marcelo O.

    2011-01-01

    Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale-invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale-invariance across overall spectro-temporal structure of the sound has not been explored directly in auditory perception. H...
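One concrete form of the scale-invariant statistics cited in this record (the 1/f spectra of Clarke and Voss) is equal power per octave: halving or doubling the frequency scale leaves the energy distribution unchanged. A toy illustration, not the authors' analysis, builds a signal from octave-spaced sines with amplitude proportional to 1/sqrt(f), so component power falls as 1/f:

```python
import math

FS = 2048                    # sample rate (Hz); we analyze one second of signal
FREQS = [32, 64, 128, 256]   # octave-spaced components

def one_over_f_signal():
    """Sum of octave-spaced sines with amplitude ~ 1/sqrt(f), i.e. power
    ~ 1/f: each octave carries the same energy (scale-invariance)."""
    return [sum(math.sin(2 * math.pi * f * n / FS) / math.sqrt(f) for f in FREQS)
            for n in range(FS)]

def component_amplitude(signal, f):
    """Recover the amplitude at f by projecting onto a sine at f; components
    completing an integer number of cycles over the window are orthogonal."""
    return 2 / len(signal) * sum(s * math.sin(2 * math.pi * f * n / FS)
                                 for n, s in enumerate(signal))

sig = one_over_f_signal()
amps = {f: component_amplitude(sig, f) for f in FREQS}
```

Checking that `f * amps[f]**2` is constant across `FREQS` confirms the equal-power-per-octave property that makes the signal statistically self-similar.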

  19. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Background: Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results: During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion: These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able...

  20. 2 years of integral monitoring of GRS 1915+105. II. X-ray spectro-temporal analysis

    DEFF Research Database (Denmark)

    Rodriguez, J.; Shaw, S.E.; Hannikainen, D.C.;

    2008-01-01

    This is the second paper presenting the results of 2 yr of monitoring of GRS 1915+105 with INTEGRAL, RXTE, and the Ryle Telescope. We present the X-ray spectral and temporal analysis of four observations showing strong radio to X-ray correlations. During one observation GRS 1915+105 was in a steady state, while during the three others it showed cycles of X-ray dips and spikes (followed by radio flares). Through time-resolved spectroscopy of these cycles, we suggest that the soft X-ray spike is the trigger of the ejection. The ejected medium is then the coronal material responsible for the hard X-ray emission. In the steady state observation, the X-ray spectrum is indicative of the hard-intermediate state, with the presence of a relatively strong emission at 15 GHz. The X-ray spectrum is the sum of a Comptonized component and an extra power law extending to energies > 200 keV without any evidence for a...

  1. Two Years of INTEGRAL monitoring of GRS 1915+105 Part 2: X-Ray Spectro-Temporal Analysis

    CERN Document Server

    Rodríguez, J; Hannikainen, D C; Belloni, T; Corbel, S; Cadolle Bel, M; Chenevez, J; Prat, L; Kretschmar, P; Lehto, H J; Mirabel, I F; Paizis, A; Pooley, G; Tagger, M; Varniere, P; Cabanac, C; Vilhu, O

    2007-01-01

    (abridged) This is the second paper presenting the results of two years of monitoring of GRS 1915+105 with INTEGRAL and RXTE and the Ryle Telescope. We present the X-ray spectral and temporal analysis of four observations which showed strong radio to X-ray correlations. During one observation GRS 1915+105 was in a steady state, while during the three others it showed cycles of X-ray dips and spikes (followed by radio flares). We present the time-resolved spectroscopy of these cycles and show that in all cases the hard X-ray component (the Comptonized emission from a coronal medium) is suppressed in coincidence with a soft X-ray spike that ends the cycle. We interpret these results as evidence that the soft X-ray spike is the trigger of the ejection, and that the ejected medium is the coronal material. In the steady state observation, the X-ray spectrum is indicative of the hard-intermediate state, with the presence of a relatively strong emission at 15 GHz. The X-ray spectra are the sum of a Comptonized comp...

  2. Integration of temporal subtraction and nodule detection system for digital chest radiographs into picture archiving and communication system (PACS): four-year experience.

    Science.gov (United States)

    Sakai, Shuji; Yabuuchi, Hidetake; Matsuo, Yoshio; Okafuji, Takashi; Kamitani, Takeshi; Honda, Hiroshi; Yamamoto, Keiji; Fujiwara, Keiichi; Sugiyama, Naoki; Doi, Kunio

    2008-03-01

    Since May 2002, temporal subtraction and nodule detection systems for digital chest radiographs have been integrated into our hospital's picture archiving and communication system (PACS). Image data of digital chest radiographs were stored in PACS with the Digital Imaging and Communications in Medicine (DICOM) protocol. Temporal subtraction and nodule detection images were produced automatically on a dedicated server and delivered with current and previous images to the workstations. The problems that we faced and the solutions that we arrived at were analyzed. We encountered four major problems. The first problem, caused by the storage of original image data in upside-down, reversed, or lying-down positioning on portable chest radiographs, was solved by postponing the original data storage for 30 min. The second problem, the variable matrix sizes of chest radiographs obtained with flat-panel detectors (FPDs), was solved by improving the computer algorithm to produce consistent temporal subtraction images. The third problem, the production of temporal subtraction images of low quality, could not be solved fundamentally when the original images were obtained with different modalities. The fourth problem, an excessive false-positive rate of the nodule detection system, was solved by adjusting the system to the chest radiographs obtained in our hospital. Integration of the temporal subtraction and nodule detection systems into our hospital's PACS was customized successfully; this experience may be helpful to other hospitals. PMID:17333415
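The core of a temporal subtraction system is registering the previous radiograph to the current one and subtracting, so unchanged anatomy cancels and interval change (such as a new nodule) stands out. A deliberately simplified sketch follows, using an exhaustive integer-pixel shift search in place of the nonlinear warping real systems employ; all names are hypothetical and images are plain nested lists:

```python
def best_shift(prev, curr, max_shift=2):
    """Find the integer (dy, dx) shift of `prev` that minimizes the summed
    squared difference against `curr` (a toy stand-in for image warping)."""
    h, w = len(curr), len(curr[0])
    def ssd(dy, dx):
        total = 0.0
        for y in range(h):
            for x in range(w):
                py, px = y - dy, x - dx
                if 0 <= py < h and 0 <= px < w:
                    total += (curr[y][x] - prev[py][px]) ** 2
        return total
    return min(((dy, dx) for dy in range(-max_shift, max_shift + 1)
                         for dx in range(-max_shift, max_shift + 1)),
               key=lambda s: ssd(*s))

def temporal_subtraction(prev, curr):
    """Shift-register `prev` onto `curr`, then subtract: stable anatomy
    cancels to ~0 and interval change remains in the difference image."""
    dy, dx = best_shift(prev, curr)
    h, w = len(curr), len(curr[0])
    return [[curr[y][x] - (prev[y - dy][x - dx]
             if 0 <= y - dy < len(prev) and 0 <= x - dx < len(prev[0]) else 0.0)
             for x in range(w)] for y in range(h)]
```

The variable-matrix-size problem mentioned in the record shows up even in this toy: the subtraction only makes sense after both images are brought onto a common grid.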

  3. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. PMID:26332785

  4. Auditory hallucinations: A review of the ERC "VOICE" project.

    Science.gov (United States)

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions: a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore, the voices; and an emotional dimension, experienced as the "voices" having a primarily negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-amino-butyric-acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucination that go beyond standard interview questionnaires, by suggesting an iPhone/iPod app. PMID:26110121

  5. Auditory rhythmic cueing in movement rehabilitation: findings and possible mechanisms

    Science.gov (United States)

    Schaefer, Rebecca S.

    2014-01-01

    Moving to music is intuitive and spontaneous, and music is widely used to support movement, most commonly during exercise. Auditory cues are increasingly also used in the rehabilitation of disordered movement, by aligning actions to sounds such as a metronome or music. Here, the effect of rhythmic auditory cueing on movement is discussed and representative findings of cued movement rehabilitation are considered for several movement disorders, specifically post-stroke motor impairment, Parkinson's disease and Huntington's disease. There are multiple explanations for the efficacy of cued movement practice. Potentially relevant, non-mutually exclusive mechanisms include the acceleration of learning; qualitatively different motor learning owing to an auditory context; effects of increased temporal skills through rhythmic practices and motivational aspects of musical rhythm. Further considerations of rehabilitation paradigm efficacy focus on specific movement disorders, intervention methods and complexity of the auditory cues. Although clinical interventions using rhythmic auditory cueing do not show consistently positive results, it is argued that internal mechanisms of temporal prediction and tracking are crucial, and further research may inform rehabilitation practice to increase intervention efficacy. PMID:25385780

  6. Early visual deprivation severely compromises the auditory sense of space in congenitally blind children.

    Science.gov (United States)

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-06-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in children, and that this capacity recovers over time. PMID:27228448

  7. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole. PMID:20881120

  8. Functional MR imaging of cerebral auditory cortex with linguistic and non-linguistic stimulation: preliminary study

    International Nuclear Information System (INIS)

    To obtain preliminary data for understanding the central auditory neural pathway by means of functional MR imaging (fMRI) of the cerebral auditory cortex during linguistic and non-linguistic auditory stimulation. In three right-handed volunteers we conducted fMRI of auditory cortex stimulation at 1.5 T using a conventional gradient-echo technique (TR/TE/flip angle: 80/60/40 deg). Using a pulsed tone of 1000 Hz and speech as non-linguistic and linguistic auditory stimuli, respectively, images, including those of the superior temporal gyrus of both hemispheres, were obtained in sagittal planes. Both stimuli were separately delivered binaurally or monaurally through a plastic earphone. Images were activated by processing with homemade software. In order to analyze patterns of auditory cortex activation according to type of stimulus and which side of the ear was stimulated, the number and extent of activated pixels were compared between both temporal lobes. Binaural stimulation led to bilateral activation of the superior temporal gyrus, while monaural stimulation led to more activation in the contralateral temporal lobe than in the ipsilateral. A trend toward slight activation of the left (dominant) temporal lobe in ipsilateral stimulation, particularly with a linguistic stimulus, was observed. During both binaural and monaural stimulation, a linguistic stimulus produced more widespread activation than did a non-linguistic one. The superior temporal gyri of both temporal lobes are associated with acoustic-phonetic analysis, and the left (dominant) superior temporal gyrus is likely to play a dominant role in this processing. For better understanding of physiological and pathological central auditory pathways, further investigation is needed.

  9. Long-term music training tunes how the brain temporally binds signals from multiple senses.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2011-12-20

    Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics-fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry. Critically, the premotor asynchrony effects predicted musicians' perceptual sensitivity to audiovisual asynchrony. Our results suggest that piano practicing fine tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds. This internal forward model furnishes more precise estimates of the relative audiovisual timings and hence, stronger prediction error signals specifically for asynchronous music in a premotor-cerebellar circuitry. Our findings show intimate links between action production and audiovisual temporal binding in perception. PMID:22114191

  10. Auditory Learning. Dimensions in Early Learning Series.

    Science.gov (United States)

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  11. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  12. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  13. Auditory processing efficiency deficits in children with developmental language impairments

    Science.gov (United States)

    Hartley, Douglas E. H.; Moore, David R.

    2002-12-01

    The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, both within the auditory and visual domains. This hypothesis has been supported through evidence that language-impaired individuals have excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. Results from this modeling suggest that the masking results can be better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window, but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in the simultaneous masking task, and the much larger deficits in backward and forward masking tasks amongst those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
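The temporal-window idea in this record can be sketched with a toy model: masker energy is weighted by a window centred on the signal, and the listener detects the signal once its energy reaches k times the weighted masker energy, where k rises as processing efficiency falls. All constants below are illustrative inventions, not the fitted values from the modeling studies, and the sketch is linear: it shows how a higher required signal-to-masker ratio shifts masked thresholds, while the differential backward-masking deficit discussed in the record additionally depends on basilar-membrane compression, which is omitted here.

```python
import math

def window_weight(tau, t_fwd=0.030, t_bwd=0.008):
    """Asymmetric temporal window centred on the signal.
    tau < 0: masker energy arriving before the signal (forward masking).
    tau > 0: masker energy arriving after the signal (backward masking).
    Time constants are illustrative, not fitted values."""
    return math.exp(tau / t_fwd) if tau < 0 else math.exp(-tau / t_bwd)

def effective_masker(masker_onset, masker_dur, step=0.0005):
    """Unit-intensity masker energy as weighted by the temporal window."""
    n = int(masker_dur / step)
    return sum(window_weight(masker_onset + i * step) * step for i in range(n))

def threshold_db(masker_onset, masker_dur, k=1.0):
    """Masked threshold: signal energy must reach k times the window-weighted
    masker energy; k is the required signal-to-noise level (1/efficiency)."""
    return 10 * math.log10(k * effective_masker(masker_onset, masker_dur))

sim = threshold_db(-0.05, 0.1)   # masker spans the signal: simultaneous masking
back = threshold_db(0.02, 0.1)   # masker starts 20 ms after signal: backward
```

In this linear sketch, raising k shifts every masked threshold by the same 10*log10(k) dB; reproducing the larger backward-masking deficit requires adding the compressive nonlinearity the record describes.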

  14. Visual–auditory spatial processing in auditory cortical neurons

    OpenAIRE

    Bizley, Jennifer K.; King, Andrew J

    2008-01-01

    Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...

  15. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)

  16. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)

  17. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

    People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms, and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences of performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
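Coordination studies of this kind typically quantify whether the movement leads or follows the stimulus by computing relative phase between the two rhythms. A minimal sketch of discrete (peak-to-peak) relative phase follows; the function names and sign convention (negative means the movement leads) are illustrative, not taken from the study:

```python
import math

def peak_times(samples, dt):
    """Times of local maxima in a uniformly sampled oscillation."""
    return [i * dt for i in range(1, len(samples) - 1)
            if samples[i - 1] < samples[i] >= samples[i + 1]]

def mean_relative_phase(movement, stimulus, dt, period):
    """Discrete relative phase in degrees: each movement peak time minus the
    nearest stimulus peak time, expressed as a fraction of the stimulus cycle.
    Negative values indicate that the movement leads the stimulus."""
    stim_peaks = peak_times(stimulus, dt)
    phases = []
    for tm in peak_times(movement, dt):
        ts = min(stim_peaks, key=lambda t: abs(t - tm))
        phases.append(360.0 * (tm - ts) / period)
    return sum(phases) / len(phases)
```

With a continuous stimulus this measure would come out negative for the participants described above (movement leading) and positive for discrete stimuli (movement following).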

  18. The auditory characteristics of children with inner auditory canal stenosis.

    Science.gov (United States)

    Ai, Yu; Xu, Lei; Li, Li; Li, Jianfeng; Luo, Jianfen; Wang, Mingming; Fan, Zhaomin; Wang, Haibo

    2016-07-01

    Conclusions This study shows that the prevalence of auditory neuropathy spectrum disorder (ANSD) in children with inner auditory canal (IAC) stenosis is much higher than in those without IAC stenosis, regardless of whether they have other inner ear anomalies. In addition, the auditory characteristics of ANSD with IAC stenosis are significantly different from those of ANSD without any middle and inner ear malformations. Objectives To describe the auditory characteristics of children with IAC stenosis and to examine whether a narrow inner auditory canal is associated with ANSD. Method A total of 21 children with inner auditory canal stenosis participated in this study, and a series of auditory tests was administered. A comparative study was also conducted on the auditory characteristics of ANSD, based on whether the children had isolated IAC stenosis. Results Wave V in the ABR was not observed in any of the patients, while a cochlear microphonic (CM) response was detected in 81.1% of ears with stenotic IAC. Sixteen of 19 (84.2%) ears with isolated IAC stenosis had a CM response present on auditory brainstem response (ABR) waveforms. There was no significant difference in ANSD characteristics between the children with and without isolated IAC stenosis. PMID:26981851

  19. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence suggests that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. PMID:23664946

  20. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
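
    The field-field coherence measure used above can be illustrated with off-the-shelf tools. The sketch below runs on synthetic data only (the shared frequency, amplitudes, and segment length are arbitrary choices, not values from the study) and computes magnitude-squared coherence between two simulated LFP channels via `scipy.signal.coherence`:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1000                       # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of simulated recording

# Two synthetic LFPs sharing an 8 Hz rhythm plus independent noise,
# standing in for auditory cortex and ventral striatum channels.
shared = np.sin(2 * np.pi * 8 * t)
lfp_cortex = shared + rng.normal(0, 1, t.size)
lfp_striatum = 0.7 * shared + rng.normal(0, 1, t.size)

# Magnitude-squared coherence, Welch-averaged over 1 s segments.
f, Cxy = coherence(lfp_cortex, lfp_striatum, fs=fs, nperseg=fs)
peak_freq = f[np.argmax(Cxy)]
print(f"peak coherence {Cxy.max():.2f} at {peak_freq:.0f} Hz")
```

    Coherence near 1 at the shared frequency and near 0 elsewhere is the kind of signature tracked over the course of training.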

  1. Auditory Spatial Coding Flexibly Recruits Anterior, but Not Posterior, Visuotopic Parietal Cortex.

    Science.gov (United States)

    Michalka, Samantha W; Rosen, Maya L; Kong, Lingqiang; Shinn-Cunningham, Barbara G; Somers, David C

    2016-03-01

    Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. PMID:26656996

  2. A Transient Auditory Signal Shifts the Perceived Offset Position of a Moving Visual Object

    Directory of Open Access Journals (Sweden)

    Sung-En Chien

    2013-02-01

    Full Text Available Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes.

  3. A transient auditory signal shifts the perceived offset position of a moving visual object.

    Science.gov (United States)

    Chien, Sung-En; Ono, Fuminori; Watanabe, Katsumi

    2013-01-01

    Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived visual offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived visual offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes. PMID:23439729

  4. Temporal Integration Effects in Facial Expression Recognition under Different Temporal Duration Conditions

    Institute of Scientific and Technical Information of China (English)

    陈本友; 黄希庭

    2012-01-01

    By segmenting facial expressions into three parts and presenting the parts sequentially at different inter-stimulus intervals and presentation durations, we examined participants' temporal integration of facial expressions, in order to explore the processing and influencing factors of temporal integration. The results showed that: (1) the temporal integration of facial expressions is affected by temporal structure and stimulus material; (2) whether separately presented parts of a facial expression can be temporally integrated depends on the SOA; (3) temporal integration of facial expressions differs across expression categories; and (4) temporal integration of facial expressions takes place within a limited visual buffer, with iconic memory and long-term memory closely involved in the integration process. Temporal integration is the process of perceptual processing in which successively presented, separate stimuli are combined into a meaningful representation. It is a complex process influenced by multiple factors, such as temporal structure and stimulus components. Although this process has been explored with inter-stimulus intervals in face perception, little is known about the temporal integration effect in facial expression recognition. More importantly, there has been no direct evidence that stimulus duration and stimulus category affect the temporal integration of facial expressions. In the present study, a part-whole judgment task was used to examine the factors influencing temporal integration of facial expressions. In two experiments, each of three whole facial expression pictures was segmented into three parts, each including a salient facial feature: eye, nose, or mouth. These parts were presented to participants sequentially, at various intervals or presentation durations, in a fixed sequence: eye part first, nose next, and mouth last. The last part was followed by a mask, which eliminated effects of afterimages or other types of visual persistence. Then, participants were asked to judge the category of the facial expression by pressing one of three number keys, "1", "2" and "3", corresponding to anger, happy and

  5. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue;

    2006-01-01

    BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movements aids the recognition of spoken words. Conventionally it is thought that the visual input disambiguates otherwise confusing auditory input. HYPOTHESIS: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of the audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p...

  6. High average power Yb:CaF2 femtosecond amplifier with integrated simultaneous spatial and temporal focusing for laser material processing

    Science.gov (United States)

    Squier, J.; Thomas, J.; Block, E.; Durfee, C.; Backus, S.

    2014-01-01

    A watt-level, 10-kHz repetition rate chirped pulse amplification system with an integrated simultaneous spatial and temporal focusing (SSTF) processing system is demonstrated for the first time. SSTF significantly reduces the nonlinear effects normally detrimental to beam control, enabling the use of a low-numerical-aperture focus to quickly treat optically transparent materials over a large area. The integrated SSTF system has improved efficiency compared with previously reported SSTF designs, which, combined with the high repetition rate of the laser, further optimizes its capability to provide rapid, large-volume processing.

  7. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    Full Text Available The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performances observed in the present study are in line with those reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of a spatially accurate auditory and visual rendering for VR setups.

  8. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrete, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  9. Unraveling the Biology of Auditory Learning: A Cognitive-Sensorimotor-Reward Framework.

    Science.gov (United States)

    Kraus, Nina; White-Schwoch, Travis

    2015-11-01

    The auditory system is stunning in its capacity for change: a single neuron can modulate its tuning in minutes. Here we articulate a conceptual framework to understand the biology of auditory learning where an animal must engage cognitive, sensorimotor, and reward systems to spark neural remodeling. Central to our framework is a consideration of the auditory system as an integrated whole that interacts with other circuits to guide and refine life in sound. Despite our emphasis on the auditory system, these principles may apply across the nervous system. Understanding neuroplastic changes in both normal and impaired sensory systems guides strategies to improve everyday communication. PMID:26454481

  10. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  11. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms

  12. Facilitating Speech, Language and Auditory Training through Tap Dancing and Creative Movement.

    Science.gov (United States)

    Leung, Katherine

    The paper describes specific techniques developed for utilizing tap dance and creative movement to facilitate speech, language, and auditory training in children with communication disorders. The integration of auditory training with speech and language activities is noted, as is the importance of incorporating movement, sound, rhythm, and…

  13. Auditory and speech processing and reading development in Chinese school children: behavioural and ERP evidence.

    Science.gov (United States)

    Meng, Xiangzhi; Sai, Xiaoguang; Wang, Cixin; Wang, Jue; Sha, Shuying; Zhou, Xiaolin

    2005-11-01

    By measuring behavioural performance and event-related potentials (ERPs) this study investigated the extent to which Chinese school children's reading development is influenced by their skills in auditory, speech, and temporal processing. In Experiment 1, 102 normal school children's performance in pure tone temporal order judgment, tone frequency discrimination, temporal interval discrimination and composite tone pattern discrimination was measured. Results showed that children's auditory processing skills correlated significantly with their reading fluency, phonological awareness, word naming latency, and the number of Chinese characters learned. Regression analyses found that tone temporal order judgment, temporal interval discrimination and composite tone pattern discrimination could account for 32% of variance in phonological awareness. Controlling for the effect of phonological awareness, auditory processing measures still contributed significantly to variance in reading fluency and character naming. In Experiment 2, mismatch negativities (MMN) in event-related brain potentials were recorded from dyslexic children and the matched normal children, while these children listened passively to Chinese syllables and auditory stimuli composed of pure tones. The two groups of children did not differ in MMN to stimuli deviated in pure tone frequency and Chinese lexical tones. But dyslexic children showed smaller MMN to stimuli deviated in initial consonants or vowels of Chinese syllables and to stimuli deviated in temporal information of composite tone patterns. These results suggested that Chinese dyslexic children have deficits in auditory temporal processing as well as in linguistic processing and that auditory and temporal processing is possibly as important to reading development of children in a logographic writing system as in an alphabetic system. PMID:16355749

  14. Automatically detecting auditory P300 in several trials

    Institute of Scientific and Technical Information of China (English)

    莫少锋; 汤井田; 陈洪波

    2015-01-01

    A method based on Infomax independent component analysis (Infomax ICA) was demonstrated for automatically extracting auditory P300 signals within a few trials. A signaling equilibrium algorithm was proposed to enhance the effectiveness of the Infomax ICA decomposition. After the mixed signal was decomposed by Infomax ICA, the independent components (ICs) used for auditory P300 reconstruction were chosen automatically using the standard deviation of the fixed temporal pattern, and the auditory P300 was reconstructed from the selected ICs. The experimental results show that the auditory P300 can be detected automatically within five trials. The Pearson correlation coefficient between the standard signal and the signal detected using the proposed method is significantly greater than that between the standard signal and the signal detected using the averaging method within five trials. For the same number of trials, the waveform obtained with the proposed algorithm is closer to the standard signal than that obtained by averaging. The proposed method can therefore automatically detect an effective auditory P300 within a few trials.
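
    A rough sketch of this kind of pipeline, on synthetic data: mix a repeating P300-like source into simulated channels, unmix with ICA, then auto-select the component whose trial average shows a large deflection at the expected latency relative to its baseline. Two loud assumptions: scikit-learn's `FastICA` stands in for the paper's Infomax ICA, and the selection score is a simplified stand-in for the paper's fixed-temporal-pattern criterion.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
fs, n_trials, n_ch = 250, 5, 6
epoch = np.arange(0, 0.8, 1 / fs)                  # 0.8 s epochs
bump = np.exp(-0.5 * ((epoch - 0.3) / 0.03) ** 2)  # P300-like peak at 300 ms

# Five concatenated trials: one repeating P300-like source plus two
# noise sources, mixed into six simulated electrodes.
sources = np.vstack([np.tile(bump, n_trials),
                     rng.normal(0, 1, bump.size * n_trials),
                     rng.normal(0, 1, bump.size * n_trials)])
eeg = rng.normal(0, 1, (n_ch, 3)) @ sources

# Unmix (FastICA here; the paper uses Infomax ICA).
ics = FastICA(n_components=3, random_state=0).fit_transform(eeg.T).T

# Score each IC: trial-average it, then compare the peak deflection
# inside a 200-400 ms window against the spread outside the window.
win = (epoch >= 0.2) & (epoch <= 0.4)
avgs = ics.reshape(3, n_trials, epoch.size).mean(axis=1)
scores = np.abs(avgs[:, win]).max(axis=1) / avgs[:, ~win].std(axis=1)
best = int(np.argmax(scores))
latency = float(epoch[np.argmax(np.abs(avgs[best]))])
print(f"selected IC {best}, peak at {latency * 1000:.0f} ms")
```

    Back-projecting only the selected component (via the estimated mixing matrix) would then give the denoised channel-space P300.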

  15. Neuromagnetic evidence for early auditory restoration of fundamental pitch.

    Directory of Open Access Journals (Sweden)

    Philip J Monahan

    Full Text Available BACKGROUND: Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset. METHODOLOGY: Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as "virtual pitch") changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone complex stimuli diverged only in the value of the missing fundamental component. PRINCIPAL FINDINGS: We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched those of their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies. CONCLUSIONS: Our findings suggest that listeners reconstruct the inferred pitch by roughly 100 ms after stimulus onset and are consistent with previous electrophysiological research suggesting that the inferential pitch is perceived in early auditory cortex.
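
    Extracting an M100 latency from an evoked waveform, the quantity compared across stimuli here, amounts to finding the dominant deflection in a post-onset search window. A minimal sketch on a synthetic evoked response (the waveform shape, noise level, and 70-140 ms window are illustrative assumptions, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000
t = np.arange(0, 0.3, 1 / fs)

# Synthetic evoked field: an M100-like deflection peaking ~105 ms after
# stimulus onset, plus a little sensor noise (illustrative values only).
evoked = -np.exp(-0.5 * ((t - 0.105) / 0.015) ** 2) + rng.normal(0, 0.02, t.size)

def m100_latency(waveform, t, lo=0.07, hi=0.14):
    """Latency of the largest deflection within the M100 search window."""
    win = (t >= lo) & (t <= hi)
    return float(t[win][np.argmax(np.abs(waveform[win]))])

lat = m100_latency(evoked, t)
print(f"M100 latency: {lat * 1000:.0f} ms")
```

    Comparing such latencies between tone complexes and their pure-tone counterparts is the essence of the analysis described above.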

  16. Hypermnesia using auditory input.

    Science.gov (United States)

    Allen, J

    1992-07-01

    The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564

  17. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  18. The Perception of Auditory Motion.

    Science.gov (United States)

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  19. Electrophysiological correlates of individual differences in perception of audiovisual temporal asynchrony.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2016-06-01

    Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability: 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency in the neural encoding of sound onsets. PMID:27094850
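
    The core analysis, mean voltage in a post-stimulus window used as a linear predictor of behavioral sensitivity, is easy to sketch. The simulation below uses invented numbers (subject count, noise level, effect size) purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 500
t = np.arange(-0.1, 0.5, 1 / fs)
n_subj = 20

# Simulated per-subject ERPs: a positivity near 240 ms whose amplitude
# tracks a latent sensitivity score, plus noise (invented numbers).
sensitivity = rng.uniform(0, 1, n_subj)
erps = np.array([s * np.exp(-0.5 * ((t - 0.24) / 0.03) ** 2)
                 + rng.normal(0, 0.2, t.size) for s in sensitivity])

# Mean voltage in the 210-270 ms window, as in the reported analysis.
win = (t >= 0.21) & (t <= 0.27)
mean_volt = erps[:, win].mean(axis=1)

# Variance in sensitivity explained by the ERP measure.
r = np.corrcoef(mean_volt, sensitivity)[0, 1]
print(f"R^2 = {r ** 2:.2f}")
```

    With real data the same steps apply per subject, with the windowed mean entering a regression against the behavioral asynchrony-detection measure.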

  20. Cholinergic modulation of auditory steady-state response in the auditory cortex of the freely moving rat.

    Science.gov (United States)

    Zhang, J; Ma, L; Li, W; Yang, P; Qin, L

    2016-06-01

    As disturbance in auditory steady-state response (ASSR) has been consistently found in many neuropsychiatric disorders, such as autism spectrum disorder and schizophrenia, there is considerable interest in the development of translational rat models to elucidate the underlying neural and neurochemical mechanisms involved in ASSR. This is the first study to investigate the effects of the non-selective muscarinic antagonist scopolamine and the cholinesterase inhibitor donepezil (also in combination with scopolamine) on ASSR. We recorded the local field potentials through chronic microelectrodes implanted in the auditory cortex of freely moving rats. ASSRs were recorded in response to auditory stimuli delivered over a range of frequencies (10-80 Hz) and averaged over 60 trials. We found that a single dose of scopolamine produced a temporal attenuation in response to auditory stimuli; the greatest attenuation occurred at 40 Hz. Time-frequency analysis revealed deficits in both power and phase-locking at 40 Hz. Donepezil augmented 40 Hz steady-state power and phase-locking. Scopolamine combined with donepezil had an enhanced effect on the phase-locking, but not the power, of the ASSR. These changes induced by cholinergic drugs suggest an involvement of muscarinic neurotransmission in auditory processing and provide a rodent model for investigating the neurochemical mechanism of neurophysiological deficits seen in patients. PMID:26964684
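
    The two ASSR measures reported, evoked power and phase-locking at 40 Hz, can be sketched from trial data with a single FFT per trial. The simulation below is illustrative only (signal amplitude, noise level, and phase are arbitrary choices, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n_trials = 1000, 60
t = np.arange(0, 1.0, 1 / fs)

# 60 simulated trials of a 40 Hz steady-state response with a fixed
# phase, buried in noise (mirroring the 60-trial averaging in the study).
trials = np.array([0.5 * np.sin(2 * np.pi * 40 * t + 0.8)
                   + rng.normal(0, 1, t.size) for _ in range(n_trials)])

spec = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin40 = int(np.argmin(np.abs(freqs - 40)))

# Evoked power: squared magnitude of the trial-averaged 40 Hz component.
power40 = np.abs(spec[:, bin40].mean()) ** 2
# Phase-locking factor: mean of unit-length phase vectors across trials
# (1 = perfectly phase-locked, near 0 = random phase).
plf40 = np.abs(np.mean(spec[:, bin40] / np.abs(spec[:, bin40])))
print(f"evoked power {power40:.0f}, phase-locking factor {plf40:.2f}")
```

    A drug that disrupts phase-locking but not single-trial power would lower `plf40` while leaving per-trial 40 Hz magnitudes largely unchanged, which is the kind of dissociation the study examines.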

  1. Ectopic external auditory canal and ossicular formation in the oculo-auriculo-vertebral spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Supakul, Nucharin [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ramathibodi Hospital, Mahidol University, Department of Diagnostic and Therapeutic Radiology, Bangkok (Thailand); Kralik, Stephen F. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ho, Chang Y. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Riley Children's Hospital, MRI Department, Indianapolis, IN (United States)

    2015-07-15

    Ear abnormalities in the oculo-auriculo-vertebral spectrum commonly present with varying degrees of external and middle ear atresia, usually in the expected locations of the temporal bone and associated soft tissues, without ectopia of the external auditory canal. We present the unique imaging of a 4-year-old girl with right hemifacial microsomia and an ectopic, atretic external auditory canal terminating in a hypoplastic temporomandibular joint containing bony structures with the appearance of auditory ossicles. This finding suggests an early embryological dysfunction involving Meckel's cartilage of the first branchial arch. (orig.)

  2. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    Full Text Available OBJECTIVE: To assess auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance differed significantly from the control group in both kinds of tests. In the non-verbal test, identification of stimuli presented to the ear opposite the lesion was diminished in the free recall stage, and a difficulty in directing attention was detected in the subsequent stages. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing during the attended stages were found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  3. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  4. Functional topography of converging visual and auditory inputs to neurons in the rat superior colliculus.

    Science.gov (United States)

    Skaliora, Irini; Doubell, Timothy P; Holmes, Nicholas P; Nodal, Fernando R; King, Andrew J

    2004-11-01

    We have used a slice preparation of the infant rat midbrain to examine converging inputs onto neurons in the deeper multisensory layers of the superior colliculus (dSC). Electrical stimulation of the superficial visual layers (sSC) and of the auditory nucleus of the brachium of the inferior colliculus (nBIC) evoked robust monosynaptic responses in dSC cells. Furthermore, the inputs from the sSC were found to be topographically organized as early as the second postnatal week and thus before opening of the eyes and ear canals. This precocious topography was found to be sculpted by GABAA-mediated inhibition of a more widespread set of connections. Tracer injections in the nBIC, both in coronal slices as well as in hemisected brains, confirmed a robust projection originating in the nBIC with distinct terminals in the proximity of the cell bodies of dSC neurons. Combined stimulation of the sSC and nBIC sites revealed that the presumptive visual and auditory inputs are summed linearly. Finally, whereas either input on its own could manifest a significant degree of paired-pulse facilitation, temporally offset stimulation of the two sites revealed no synaptic interactions, indicating again that the two inputs function independently. Taken together, these data provide the first detailed intracellular analysis of convergent sensory inputs onto dSC neurons and form the basis for further exploration of multisensory integration and developmental plasticity. PMID:15229210

  5. Effects of Auditory Rhythm and Music on Gait Disturbances in Parkinson's Disease.

    Science.gov (United States)

    Ashoori, Aidin; Eagleman, David M; Jankovic, Joseph

    2015-01-01

    Gait abnormalities, such as shuffling steps, start hesitation, and freezing, are common and often incapacitating symptoms of Parkinson's disease (PD) and other parkinsonian disorders. Pharmacological and surgical approaches have only limited efficacy in treating these gait disorders. Rhythmic auditory stimulation (RAS), such as playing marching music and dance therapy, has been shown to be a safe, inexpensive, and effective method of improving gait in PD patients. However, RAS that adapts to patients' movements may be more effective than the rigid, fixed-tempo RAS used in most studies. In addition to auditory cueing, immersive virtual reality technologies that utilize interactive computer-generated systems through wearable devices are increasingly used for improving brain-body interaction and sensory-motor integration. Using multisensory cues, these therapies may be particularly suitable for the treatment of parkinsonian freezing and other gait disorders. In this review, we examine the affected neurological circuits underlying gait and temporal processing in PD patients and summarize the current studies demonstrating the effects of RAS on improving these gait deficits. PMID:26617566

  6. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident with, to the left of, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor than for the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as in planning and executing reaching actions, even when reaching towards auditory targets. PMID:26253323

  7. Introduction to auditory perception in listeners with hearing losses

    Science.gov (United States)

    Florentine, Mary; Buus, Søren

    2003-04-01

    Listeners with hearing losses cannot hear low-level sounds. In addition, they often complain that audible sounds do not have a comfortable loudness, lack clarity, and are difficult to hear in the presence of other sounds. In particular, they have difficulty understanding speech in background noise. The mechanisms underlying these complaints are not completely understood, but hearing losses are known to alter many aspects of auditory processing. This presentation highlights alterations in audibility, loudness, pitch, spectral and temporal processes, and binaural hearing that may result from hearing losses. The changes in these auditory processes can vary widely across individuals with seemingly similar amounts of hearing loss. For example, two listeners with nearly identical thresholds can differ in their ability to process spectral and temporal features of sounds. Such individual differences make rehabilitation of hearing losses complex. [Work supported by NIH/NIDCD.]

  8. Losing the beat: deficits in temporal coordination

    Science.gov (United States)

    Palmer, Caroline; Lidji, Pascale; Peretz, Isabelle

    2014-01-01

    Tapping or clapping to an auditory beat, an easy task for most individuals, reveals precise temporal synchronization with auditory patterns such as music, even in the presence of temporal fluctuations. Most models of beat-tracking rely on the theoretical concept of pulse: a perceived regular beat generated by an internal oscillation that forms the foundation of entrainment abilities. Although tapping to the beat is a natural sensorimotor activity for most individuals, not everyone can track an auditory beat. Recently, the case of Mathieu was documented (Phillips-Silver et al. 2011 Neuropsychologia 49, 961–969. (doi:10.1016/j.neuropsychologia.2011.02.002)). Mathieu presented himself as having difficulty following a beat and exhibited synchronization failures. We examined beat-tracking in normal control participants, Mathieu, and a second beat-deaf individual, who tapped with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Both beat-deaf cases exhibited failures in error correction in response to the perturbation task while exhibiting normal spontaneous motor tempi (in the absence of an auditory stimulus), supporting a deficit specific to perception–action coupling. A damped harmonic oscillator model was applied to the temporal adaptation responses; the model's parameters of relaxation time and endogenous frequency accounted for differences between the beat-deaf cases as well as the control group individuals. PMID:25385783
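    The damped harmonic oscillator model applied to the temporal adaptation responses can be sketched numerically. The sketch below is a minimal illustration of such a model, not the authors' fitted implementation; all parameter values are hypothetical:

    ```python
    import math

    def simulate_adaptation(perturb_ms, omega0, gamma, dt=0.001, steps=3000):
        """Tap asynchrony x (ms) relaxing after a metronome perturbation,
        modeled as a damped harmonic oscillator:
            x'' = -2*gamma*x' - omega0**2 * x
        omega0 ~ endogenous frequency (rad/s); 1/gamma ~ relaxation time (s).
        Semi-implicit Euler integration keeps the trajectory stable."""
        x, v = float(perturb_ms), 0.0
        trace = [x]
        for _ in range(steps):
            a = -2.0 * gamma * v - (omega0 ** 2) * x  # damping + restoring terms
            v += a * dt
            x += v * dt
            trace.append(x)
        return trace

    # A 50 ms perturbation decays back toward synchrony over a few seconds;
    # a longer relaxation time (smaller gamma) would mimic slower error correction.
    trace = simulate_adaptation(50.0, omega0=2 * math.pi, gamma=4.0)
    ```

    On such an account, beat-deaf performance would correspond to parameter settings (for example, a much longer relaxation time) that fail to cancel a perturbation before the next tap.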

  9. Losing the beat: deficits in temporal coordination.

    Science.gov (United States)

    Palmer, Caroline; Lidji, Pascale; Peretz, Isabelle

    2014-12-19

    Tapping or clapping to an auditory beat, an easy task for most individuals, reveals precise temporal synchronization with auditory patterns such as music, even in the presence of temporal fluctuations. Most models of beat-tracking rely on the theoretical concept of pulse: a perceived regular beat generated by an internal oscillation that forms the foundation of entrainment abilities. Although tapping to the beat is a natural sensorimotor activity for most individuals, not everyone can track an auditory beat. Recently, the case of Mathieu was documented (Phillips-Silver et al. 2011 Neuropsychologia 49, 961-969. (doi:10.1016/j.neuropsychologia.2011.02.002)). Mathieu presented himself as having difficulty following a beat and exhibited synchronization failures. We examined beat-tracking in normal control participants, Mathieu, and a second beat-deaf individual, who tapped with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Both beat-deaf cases exhibited failures in error correction in response to the perturbation task while exhibiting normal spontaneous motor tempi (in the absence of an auditory stimulus), supporting a deficit specific to perception-action coupling. A damped harmonic oscillator model was applied to the temporal adaptation responses; the model's parameters of relaxation time and endogenous frequency accounted for differences between the beat-deaf cases as well as the control group individuals. PMID:25385783

  10. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia Conde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between the two modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, owing to their extensive auditory–motor practice routine during musical training, show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory–motor integration in musicians at the behavioral level.

  11. The effect of exogenous spatial attention on auditory information processing.

    OpenAIRE

    Kanai, Kenichi; Ikeda, Kazuo; Tayama, Tadayuki

    2007-01-01

    This study investigated the effect of exogenous spatial attention on auditory information processing. In Experiments 1, 2 and 3, temporal order judgment tasks were performed to examine the effect. In Experiments 1 and 2, a cue tone was presented to either the left or right ear, followed by sequential presentation of two target tones. The subjects judged the order of presentation of the target tones. The results showed that subjects heard both tones simultaneously when the target tone, which wa...

  12. A model of auditory nerve responses to electrical stimulation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    peripheral for the cathodic phase. This results in an average difference of 200 μs in spike latency for AP generated by anodic vs cathodic pulses. It is hypothesized here that this difference is large enough to corrupt the temporal coding in the AN. To quantify effects of pulse polarity on auditory...... as a framework to test various stimulation strategies and to quantify their effect on the performance of CI listeners in psychophysical tasks....

  13. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  14. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    of Brodmann, more intense in the contralateral (right) side. There is activation of both frontal executive areas without lateralization. Simultaneously, while area 39 of Brodmann was being activated, the temporal lobe was being inhibited. This seemingly not previously reported functional observation suggests that inhibitory, and not only excitatory, relays play a role in the auditory pathways. The central activity in our patient (without external auditory stimuli), who was tested while having musical hallucinations, was a mirror image of that of our normal stimulated volunteers. It is suggested that the trigger role of the inner ear (if any) could conceivably be inhibitory or disinhibitory and not necessarily purely excitatory. Based on our observations, the trigger effect in our patient could occur via the left ear. Finally, our functional studies suggest that auditory memory for musical perceptions could be located in the right area 39 of Brodm

  15. Heritability of non-speech auditory processing skills.

    Science.gov (United States)

    Brewer, Carmen C; Zalewski, Christopher K; King, Kelly A; Zobay, Oliver; Riley, Alison; Ferguson, Melanie A; Bird, Jonathan E; McCabe, Margaret M; Hood, Linda J; Drayna, Dennis; Griffith, Andrew J; Morell, Robert J; Friedman, Thomas B; Moore, David R

    2016-08-01

    Recent insight into the genetic bases for autism spectrum disorder, dyslexia, stuttering, and language disorders suggests that neurogenetic approaches may also reveal at least one etiology of auditory processing disorder (APD). A person with an APD typically has difficulty understanding speech in background noise despite having normal pure-tone hearing sensitivity. The estimated prevalence of APD may be as high as 10% in the pediatric population, yet the causes are unknown and have not been explored by molecular or genetic approaches. The aim of our study was to determine the heritability of frequency and temporal resolution for auditory signals and speech recognition in noise in 96 identical or fraternal twin pairs, aged 6-11 years. Measures of auditory processing (AP) of non-speech sounds included backward masking (temporal resolution), notched noise masking (spectral resolution), pure-tone frequency discrimination (temporal fine structure sensitivity), and nonsense syllable recognition in noise. We provide evidence of significant heritability, ranging from 0.32 to 0.74, for individual measures of these non-speech-based AP skills that are crucial for understanding spoken language. Identification of specific heritable AP traits such as these serves as a basis to pursue the genetic underpinnings of APD by identifying genetic variants associated with common AP disorders in children and adults. PMID:26883091
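    For orientation, the classical twin-design estimator (Falconer's formula) recovers heritability from the gap between monozygotic and dizygotic twin correlations. The sketch below uses invented correlations for illustration only; the study itself presumably used formal variance-component model fitting:

    ```python
    def falconer_ace(r_mz, r_dz):
        """Classical ACE decomposition from twin correlations:
        h2 (additive genetic)   = 2 * (r_MZ - r_DZ)
        c2 (shared environment) = 2 * r_DZ - r_MZ
        e2 (unique environment) = 1 - r_MZ
        The three components sum to 1 by construction."""
        h2 = 2.0 * (r_mz - r_dz)
        c2 = 2.0 * r_dz - r_mz
        e2 = 1.0 - r_mz
        return h2, c2, e2

    # Invented correlations (not values from this twin study):
    h2, c2, e2 = falconer_ace(r_mz=0.70, r_dz=0.45)
    ```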

  16. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability.

    Science.gov (United States)

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-07-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
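    The discriminability account that explains visual dominance in spatial tasks is standard reliability-weighted (maximum-likelihood) cue combination; a minimal sketch with hypothetical numbers is below. The paper's point is that duration perception violates this rule: audition dominated even when discriminability was matched.

    ```python
    def fuse(mu_a, sigma_a, mu_v, sigma_v):
        """Reliability-weighted cue combination: each modality's estimate is
        weighted by its inverse variance, so the more discriminable cue
        (smaller sigma) dominates the fused percept."""
        w_a = (1.0 / sigma_a ** 2) / (1.0 / sigma_a ** 2 + 1.0 / sigma_v ** 2)
        return w_a * mu_a + (1.0 - w_a) * mu_v

    # Hypothetical duration estimates (ms): the auditory cue is three times
    # more precise, so the fused estimate is pulled strongly toward it.
    est = fuse(mu_a=500.0, sigma_a=20.0, mu_v=560.0, sigma_v=60.0)
    ```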

  17. Auditory perspective taking.

    Science.gov (United States)

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077

  18. Tactile feedback improves auditory spatial localization

    OpenAIRE

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds b...

  19. Tactile feedback improves auditory spatial localization

    OpenAIRE

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds b...

  20. Spectro-temporal processing of speech – An information-theoretic framework

    DEFF Research Database (Denmark)

    Christiansen, Thomas Ulrich; Dau, Torsten; Greenberg, Steven

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up...... physiological mechanisms of binaural processing in mammals; integration of the different stimulus features into auditory scene analysis; physiological mechanisms related to the formation of auditory objects; speech perception; and limitations of auditory perception resulting from hearing disorders....

  1. Managing Auditory Risk from Acoustically Impulsive Chemical Demonstrations

    Science.gov (United States)

    Macedone, Jeffrey H.; Gee, Kent L.; Vernon, Julia A.

    2014-01-01

    Chemical demonstrations are an integral part of the process of how students construct meaning from chemical principles, but may introduce risks to students and presenters. Some demonstrations are known to be extremely loud and present auditory hazards; little has been done to assess the risks to educators and students. Using laboratory-grade…

  2. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  3. Auditory and non-auditory effects of noise on health

    OpenAIRE

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-aud...

  4. Higher dietary diversity is related to better visual and auditory sustained attention.

    Science.gov (United States)

    Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed

    2016-04-01

    Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS. These results indicate that a higher DDS is associated with better visual and auditory sustained attention. PMID:26902532
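    A dietary diversity score of this kind is, at bottom, a count of distinct food groups reported in the 24-h recall. The group list below is a hypothetical illustration, not the exact FAO categories used in the study:

    ```python
    # Hypothetical food-group list for illustration; the study followed FAO guidelines.
    FOOD_GROUPS = {
        "cereals", "legumes", "dairy", "meat_fish", "eggs",
        "dark_leafy_vegetables", "vitamin_a_fruits_vegetables",
        "other_fruits", "other_vegetables",
    }

    def dds(recalled_groups):
        """Dietary diversity score: number of distinct recognized food
        groups represented in a single 24-h dietary recall."""
        return len(set(recalled_groups) & FOOD_GROUPS)

    # Repeated items within a group count once:
    score = dds(["cereals", "dairy", "cereals", "other_fruits", "eggs"])  # -> 4
    ```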

  5. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (the null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial

  6. Auditory Processing Disorder in Children

    Science.gov (United States)


  7. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... and school. A positive, realistic attitude and healthy self-esteem in a child with APD can work wonders. And kids with APD can go on to ...

  8. Temporal Processing Capabilities in Repetition Conduction Aphasia

    Science.gov (United States)

    Sidiropoulos, Kyriakos; Ackermann, Hermann; Wannke, Michael; Hertrich, Ingo

    2010-01-01

    This study investigates the temporal resolution capacities of the central-auditory system in a subject (NP) suffering from repetition conduction aphasia. More specifically, the patient was asked to detect brief gaps between two stretches of broadband noise (gap detection task) and to evaluate the duration of two biphasic (WN-3) continuous noise…

  9. The influence of acoustic reflections from diffusive architectural surfaces on spatial auditory perception

    Science.gov (United States)

    Robinson, Philip W.

    This thesis addresses the effect of reflections from diffusive architectural surfaces on the perception of echoes and on auditory spatial resolution. Diffusive architectural surfaces play an important role in performance venue design for architectural expression and proper sound distribution. Extensive research has been devoted to the prediction and measurement of the spatial dispersion. However, previous psychoacoustic research on perception of reflections and the precedence effect has focused on specular reflections. This study compares the echo threshold of specular reflections against those for reflections from realistic architectural surfaces, and against synthesized reflections that isolate individual qualities of reflections from diffusive surfaces, namely temporal dispersion and spectral coloration. In particular, the activation of the precedence effect, as indicated by the echo threshold, is measured. Perceptual tests are conducted with direct sound, and simulated or measured reflections with varying temporal dispersion. The threshold for reflections from diffusive architectural surfaces is found to be comparable to that of a specular reflection of similar energy rather than similar amplitude. This is surprising because the amplitude of the dispersed reflection is highly attenuated, and onset cues are reduced. This effect indicates that the auditory system is integrating reflection response energy dispersed over many milliseconds into a single stream. Studies on the effect of a single diffuse reflection are then extended to a full architectural enclosure with various surface properties. This research utilizes auralizations from measured and simulated performance venues to investigate spatial discrimination of multiple acoustic sources in rooms. It is found that discriminating the lateral arrangement of two sources is possible at narrower separation angles when reflections come from flat rather than diffusive surfaces. Additionally, subjective impressions are
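    The equal-energy finding can be made concrete with a toy impulse-response comparison: a reflection dispersed over N samples at amplitude A/√N carries the same total energy as a single specular spike of amplitude A, while its peak amplitude is far lower. The values below are illustrative, not from the thesis:

    ```python
    import math

    def energy(ir):
        """Total energy of a discrete impulse response (sum of squared samples)."""
        return sum(s * s for s in ir)

    A, N = 1.0, 64
    specular = [A]                     # single specular spike
    diffuse = [A / math.sqrt(N)] * N   # same energy, dispersed over N samples

    same_energy = abs(energy(specular) - energy(diffuse)) < 1e-9
    peak_ratio = max(diffuse) / max(specular)  # 0.125 for N = 64
    ```

    Matching thresholds by energy rather than by amplitude is consistent with the thesis's conclusion that the auditory system integrates the dispersed reflection energy into a single stream.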

  10. The Neurophysiology of Auditory Hallucinations – A Historic and Contemporary Review

    Directory of Open Access Journals (Sweden)

    Remko van Lutterveld

    2011-05-01

    Full Text Available Electroencephalography (EEG) and magnetoencephalography (MEG) are two techniques that distinguish themselves from other neuroimaging methodologies through their ability to directly measure brain-related activity and their high temporal resolution. A large body of research has applied these techniques to study auditory hallucinations. Across a variety of approaches, the left superior temporal cortex is consistently reported to be involved in this symptom. Moreover, there is increasing evidence that a failure in corollary discharge, i.e. a neural signal originating in frontal speech areas that indicates to sensory areas that forthcoming thought is self-generated, may underlie the experience of auditory hallucinations.

  11. Brainstem auditory evoked potentials in children with lead exposure

    Directory of Open Access Journals (Sweden)

    Katia de Freitas Alvarenga

    2015-02-01

    Full Text Available Introduction: Earlier studies have demonstrated an auditory effect of lead exposure in children, but the effects of low-level chronic exposure need further elucidation. Objective: To investigate the effect of chronic low-level lead exposure on the auditory system of children with a history of low blood lead levels, using an auditory electrophysiological test. Methods: Contemporary cross-sectional cohort. Study participants underwent tympanometry, pure tone and speech audiometry, transient evoked otoacoustic emissions, and brainstem auditory evoked potentials, with blood lead monitoring over a period of 35.5 months. The study included 130 children, with ages ranging from 18 months to 14 years, 5 months (mean age 6 years, 8 months ± 3 years, 2 months). Results: The mean time-integrated cumulative blood lead index was 12 µg/dL (SD ± 5.7; range: 2.4–33). All participants had hearing thresholds equal to or below 20 dBHL and normal amplitudes of transient evoked otoacoustic emissions. No association was found between the absolute latencies of waves I, III, and V, the interpeak latencies I-III, III-V, and I-V, and the cumulative lead values. Conclusion: No evidence of toxic effects from chronic low-level lead exposure was observed on the auditory function of children living in a lead-contaminated area.
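The abstract does not give the formula behind the time-integrated cumulative blood lead index. A common definition of such an index is the time-weighted average of serial blood lead measurements; the sketch below assumes that definition, and the monitoring data are hypothetical:

```python
import numpy as np

def time_integrated_blood_lead_index(times_months, levels_ug_dl):
    """Time-weighted average blood lead level over the monitoring period.

    Computed as the trapezoidal integral of concentration over time divided
    by the total duration. This is one common definition of a cumulative
    blood lead index; the study's exact formula is not given in the abstract.
    """
    t = np.asarray(times_months, dtype=float)
    c = np.asarray(levels_ug_dl, dtype=float)
    # trapezoidal rule: mean of adjacent samples weighted by interval length
    integral = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))
    return float(integral / (t[-1] - t[0]))

# Hypothetical serial measurements over a 35.5-month monitoring period
times = [0.0, 6.0, 12.0, 24.0, 35.5]      # months since first draw
levels = [14.0, 12.0, 11.0, 10.0, 13.0]   # blood lead, ug/dL
print(round(time_integrated_blood_lead_index(times, levels), 2))
```

Because the average is time-weighted, sparse late measurements do not dominate the index the way a simple mean of the samples would.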

  12. The critical role of Golgi cells in regulating spatio-temporal integration and plasticity at the cerebellum input stage

    Directory of Open Access Journals (Sweden)

    2008-07-01

    Full Text Available After its discovery at the end of the 19th century (Golgi, 1883), the Golgi cell was precisely described by S.R. y Cajal (see Cajal, 1987, 1995) and functionally identified as an inhibitory interneuron 50 years later by J.C. Eccles and colleagues (Eccles et al., 1967). Its role was then cast by Marr (1969), within the Motor Learning Theory, as a codon size regulator of granule cell activity. It was immediately clear that Golgi cells had to play a critical role, since they are the main inhibitory interneurons of the granular layer and control the activity of as many as 100 million granule cells. In vitro, Golgi cells show pacemaking, resonance, phase-reset and rebound-excitation in the theta-frequency band. These properties are likely to impact on their activity in vivo, which shows irregular spontaneous beating modulated by sensory inputs and burst responses to punctate stimulation followed by a silent pause. Moreover, investigations have given insight into Golgi cell connectivity within the cerebellar network and into their impact on the spatio-temporal organization of activity. It turns out that Golgi cells can control both the temporal dynamics and the spatial distribution of information transmitted through the cerebellar network. Moreover, Golgi cells regulate the induction of long-term synaptic plasticity at the mossy fiber-granule cell synapse. Thus, the concept is emerging that Golgi cells are of critical importance for regulating granular layer network activity, bearing important consequences for cerebellar computation as a whole.

  13. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    Science.gov (United States)

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  14. Implications of different spatial (and temporal) resolutions for integrated assessment modelling on the regional to local scale – nesting, coupling, or model integration?

    OpenAIRE

    Reis, S.; Sabel, C.; Oxley, T.

    2009-01-01

    Integrated assessment modelling (IAM) in general is currently applied to a range of environmental problems addressing aspects of air pollution and climate change, water pollution and many more. While different branches have emerged from applications within different disciplines, they share a similar view of the core features of IAM, i.e. multi-disciplinary approaches, integration across environmental compartments, and the application of models with the aim to provide decision support for comp...

  15. Investigation of spatial resolution and temporal performance of SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout) with integrated electrostatic focusing

    Science.gov (United States)

    Scaduto, David A.; Lubinsky, Anthony R.; Rowlands, John A.; Kenmotsu, Hidenori; Nishimoto, Norihito; Nishino, Takeshi; Tanioka, Kenkichi; Zhao, Wei

    2014-03-01

    We have previously proposed SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout), a novel detector concept with potentially superior spatial resolution and low-dose performance compared with existing flat-panel imagers. The detector comprises a scintillator that is optically coupled to an amorphous selenium photoconductor operated with avalanche gain, known as high-gain avalanche rushing photoconductor (HARP). High resolution electron beam readout is achieved using a field emitter array (FEA). This combination of avalanche gain, allowing for very low-dose imaging, and electron emitter readout, providing high spatial resolution, offers potentially superior image quality compared with existing flat-panel imagers, with specific applications to fluoroscopy and breast imaging. Through the present collaboration, a prototype HARP sensor with integrated electrostatic focusing and nano-Spindt FEA readout technology has been fabricated. The integrated electron-optic focusing approach is more suitable for fabricating large-area detectors. We investigate the dependence of spatial resolution on sensor structure and operating conditions, and compare the performance of electrostatic focusing with that of previous technologies. Our results show a clear dependence of spatial resolution on electrostatic focusing potential, with performance approaching that of the previous design with an external mesh electrode. Further, the temporal performance (lag) of the detector is evaluated, and the results show that the integrated electrostatic focusing design performs comparably to or better than the mesh-electrode design. This study represents the first technical evaluation and characterization of the SAPHIRE concept with integrated electrostatic focusing.

  16. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    OpenAIRE

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading v...

  17. Enhanced spontaneous functional connectivity of the superior temporal gyrus in early deafness

    OpenAIRE

    Hao Ding; Dong Ming; Baikun Wan; Qiang Li; Wen Qin; Chunshui Yu

    2016-01-01

    Early auditory deprivation may drive the auditory cortex into cross-modal processing of non-auditory sensory information. In a recent study, we showed that early deaf subjects exhibited increased activation in the superior temporal gyrus (STG) bilaterally during visual spatial working memory; however, the changes in the organization of the STG-related spontaneous functional network, and their cognitive relevance, are still unknown. To clarify this issue, we applied resting state functional ...

  18. Objective Audio Quality Assessment Based on Spectro-Temporal Modulation Analysis

    OpenAIRE

    Guo, Ziyuan

    2011-01-01

    Objective audio quality assessment is an interdisciplinary research area that incorporates audiology and machine learning. Although much work has been done on the machine learning aspect, the audiology aspect also deserves investigation. This thesis proposes a non-intrusive audio quality assessment algorithm, which is based on an auditory model that simulates the human auditory system. The auditory model is based on spectro-temporal modulation analysis of the spectrogram, which has been proven to be ...

  19. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis.

    Science.gov (United States)

    Fletcher, Phillip D; Downey, Laura E; Golden, Hannah L; Clark, Camilla N; Slattery, Catherine F; Paterson, Ross W; Schott, Jonathan M; Rohrer, Jonathan D; Rossor, Martin N; Warren, Jason D

    2015-06-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music, but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA), and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music, or heightened pleasure in music ('musicophilia') occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA, and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD, whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717

  20. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

    Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical with neurophysiological measurements to investigate the processing of non-musical audiovisual events that were synchronous or asynchronous at various levels. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis of the audiovisual asynchronous response revealed three clusters of activations, including the ACC and the SFG, and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.

  1. Auditory processing abilities in children with chronic otitis media with effusion.

    Science.gov (United States)

    Khavarghazalani, Bahare; Farahani, Farhad; Emadi, Maryam; Hosseni Dastgerdi, Zahra

    2016-05-01

    Conclusion The study results indicate that children with a history of otitis media with effusion (OME) suffer from auditory processing disorder to some degree. The findings support the hypothesis that fluctuating hearing loss may affect central auditory processing during critical periods. Objectives Evidence suggests that prolonged OME in children can result in an auditory processing disorder, presumably because hearing has been disrupted during an important developmental period. A lack of auditory stimulation leads to the abnormal development of the hearing pathways in the brain. The aim of the present study was to determine the effects of OME on binaural auditory function and auditory temporal processing. Method In the present study, the dichotic digit test (DDT) was used for binaural hearing, and the gap in noise (GIN) test was used to evaluate temporal hearing processing. Results The average values of GIN differed significantly between children with a history of OME and normal controls (p < 0.001). The mean values of the DDT score were significantly different between the two groups (p = 0.002). PMID:26881324

  2. Spectral and temporal properties of the supergiant fast X-ray transient IGR J18483-0311 observed by INTEGRAL

    CERN Document Server

    Ducci, L; Sasaki, M; Santangelo, A; Esposito, P; Romano, P; Vercellone, S

    2013-01-01

    IGR J18483-0311 is a supergiant fast X-ray transient whose compact object is located in a wide (18.5 d) and eccentric (e~0.4) orbit, which shows sporadic outbursts that reach X-ray luminosities of ~1e36 erg/s. We investigated the timing properties of IGR J18483-0311 and studied the spectra during bright outbursts by fitting physical models based on thermal and bulk Comptonization processes for accreting compact objects. We analysed archival INTEGRAL data collected in the period 2003-2010, focusing on the observations with IGR J18483-0311 in outburst. We searched for pulsations in the INTEGRAL light curves of each outburst. We took advantage of the broadband observing capability of INTEGRAL for the spectral analysis. We observed 15 outbursts, seven of which we report here for the first time. This data analysis almost doubles the statistics of flares of this binary system detected by INTEGRAL. A refined timing analysis did not reveal a significant periodicity in the INTEGRAL observation where a ~21s pulsation w...

  3. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Directory of Open Access Journals (Sweden)

    Jean-Luc Schwartz

    2014-07-01

    Full Text Available An increasing number of neuroscience papers capitalize on the assumption, published in this journal, that visual speech is typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases: for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they typically are in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures", providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  4. Interaural cross correlation of event-related potentials and diffusion tensor imaging in the evaluation of auditory processing disorder: a case study.

    Science.gov (United States)

    Jerger, James; Martin, Jeffrey; McColl, Roderick

    2004-01-01

    In a previous publication (Jerger et al, 2002), we presented event-related potential (ERP) data on a pair of 10-year-old twin girls (Twins C and E), one of whom (Twin E) showed strong evidence of auditory processing disorder. For the present paper, we analyzed cross-correlation functions of ERP waveforms generated in response to the presentation of target stimuli to either the right or left ears in a dichotic paradigm. There were four conditions; three involved the processing of real words for either phonemic, semantic, or spectral targets; one involved the processing of a nonword acoustic signal. Marked differences in the cross-correlation functions were observed. In the case of Twin C, cross-correlation functions were uniformly normal across both hemispheres. The functions for Twin E, however, suggest poorly correlated neural activity over the left parietal region during the three word processing conditions, and over the right parietal area in the nonword acoustic condition. Differences between the twins' brains were evaluated using diffusion tensor magnetic resonance imaging (DTI). For Twin E, results showed reduced anisotropy over the length of the midline corpus callosum and adjacent lateral structures, implying reduced myelin integrity. Taken together, these findings suggest that failure to achieve appropriate temporally correlated bihemispheric brain activity in response to auditory stimulation, perhaps as a result of faulty interhemispheric communication via corpus callosum, may be a factor in at least some children with auditory processing disorder. PMID:15030103
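The exact cross-correlation procedure applied to the ERP waveforms is not detailed in this record. The sketch below shows one standard formulation, a normalized cross-correlation whose peak quantifies how well two waveforms agree at their best-fitting lag; the ERP-like waveforms, delay, and noise level are simulated stand-ins, not the twins' data:

```python
import numpy as np

def normalized_xcorr(x, y):
    """Normalized cross-correlation function of two equal-length waveforms.

    Values lie in [-1, 1]; the peak magnitude indicates how strongly the two
    waveforms are correlated at the best-fitting lag. A sketch of the kind of
    waveform comparison described above, not the authors' exact procedure.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    denom = np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))
    return np.correlate(x, y, mode="full") / denom

# Hypothetical ERP-like epochs: y is a delayed, noisy copy of x
t = np.linspace(0.0, 0.5, 500)                    # 500 ms epoch, 1 kHz sampling
x = np.sin(2 * np.pi * 10 * t) * np.exp(-5 * t)   # damped 10 Hz component
y = np.roll(x, 20) + 0.1 * np.random.default_rng(1).standard_normal(t.size)

cc = normalized_xcorr(x, y)
lag = int(np.argmax(cc)) - (len(x) - 1)           # lag of the peak, in samples
print(float(np.max(cc)), lag)
```

A uniformly high peak across conditions would correspond to the well-correlated activity seen in Twin C, while reduced peaks over a region would correspond to the poorly correlated activity reported for Twin E.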

  5. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011, 2, 479–489. DOI: 10.1002/wcs.123. PMID:26302301

  6. The cat's meow: A high-field fMRI assessment of cortical activity in response to vocalizations and complex auditory stimuli.

    Science.gov (United States)

    Hall, Amee J; Butler, Blake E; Lomber, Stephen G

    2016-02-15

    Sensory systems are typically constructed in a hierarchical fashion such that lower level subcortical and cortical areas process basic stimulus features, while higher level areas reassemble these features into object-level representations. A number of anatomical pathway tracing studies have suggested that the auditory cortical hierarchy of the cat extends from a core region, consisting of the primary auditory cortex (A1) and the anterior auditory field (AAF), to higher level auditory fields that are located ventrally. Unfortunately, limitations on electrophysiological examination of these higher level fields have resulted in an incomplete understanding of the functional organization of the auditory cortex. Thus, the current study uses functional MRI in conjunction with a variety of simple and complex auditory stimuli to provide the first comprehensive examination of function across the entire cortical hierarchy. Auditory cortex function is shown to be largely lateralized to the left hemisphere, and is concentrated bilaterally in fields surrounding the posterior ectosylvian sulcus. The use of narrowband noise stimuli enables the visualization of tonotopic gradients in the posterior auditory field (PAF) and ventral posterior auditory field (VPAF) that have previously been unverifiable using fMRI and pure tones. Furthermore, auditory fields that are inaccessible to more invasive techniques, such as the insular (IN) and temporal (T) cortices, are shown to be selectively responsive to vocalizations. Collectively, these data provide a much needed functional correlate for anatomical examinations of the hierarchy of cortical structures within the cat auditory cortex. PMID:26658927

  7. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

    Full Text Available Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1–4 s). However, at longer retention intervals (8–32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  8. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia.

    Science.gov (United States)

    Dale, Corby L; Brown, Ethan G; Fisher, Melissa; Herman, Alexander B; Dowling, Anne F; Hinkley, Leighton B; Subramaniam, Karuna; Nagarajan, Srikantan S; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  9. A spatio-temporal reconstruction of sea surface temperature during Dansgaard-Oeschger events from model-data integration

    Science.gov (United States)

    Jensen, Mari F.; Nummelin, Aleksi; Borg Nielsen, Søren; Sadatzki, Henrik; Sessford, Evangeline; Kleppin, Hannah; Risebrobakken, Bjørg; Born, Andreas

    2016-04-01

    Proxy data suggest large variability in North Atlantic sea surface temperature (SST) and sea ice cover during the Dansgaard-Oeschger (DO) events of the last glacial. However, the mechanisms behind these changes are still debated. It is not clear whether the ocean temperatures are controlled by forced changes in the northward ocean heat transport or by local surface fluxes, or if, instead, the SST changes can be explained by internal variability. We address these questions by analyzing a full DO event using proxy-surrogate reconstructions. This method combines the temporally accurate information from scarce proxy reconstructions with the spatial and physical consistency of climate models. Model simulations are treated as a pool of possible ocean states from which the closest match to the reconstructions, e.g., one model year, is selected based on an objective cost function. The original chronology of the model is replaced by that of the proxy data. Repeating this algorithm for each proxy time step yields a comprehensive four-dimensional dataset that is consistent with the reconstructed data. In addition, the solution also includes variables and locations for which no reconstructions exist. We show that by using climate model data only from preindustrial control simulations, we are able to reconstruct the SST variability in the subpolar gyre region over the DO event. In the eastern Nordic Seas, on the other hand, we capture the temporal pattern but underestimate the amplitude of the variations. Based on our analysis, we suggest that the variability of the subpolar gyre during the analyzed DO event can be explained by internal variability of the climate system alone. Further research is needed to determine whether the missing amplitude in the Nordic Seas is due to model deficiencies or whether external forcing or some feedback mechanism could give rise to larger SST variability.
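The proxy-surrogate algorithm described above, selecting for each proxy time step the model year that minimizes an objective cost function at the proxy sites, can be sketched as follows. The specific cost function (a weighted squared misfit) and the synthetic data are illustrative assumptions; the record does not specify the authors' cost function:

```python
import numpy as np

def proxy_surrogate_reconstruction(proxy_series, model_pool, site_idx, weights=None):
    """Analogue (proxy-surrogate) reconstruction, sketched from the description above.

    proxy_series : (T, S) array, proxy values at S sites for T proxy time steps
    model_pool   : (Y, N) array, Y candidate model years over N grid cells
    site_idx     : (S,) grid-cell index of each proxy site
    weights      : optional (S,) per-site weights in the cost function

    Returns a (T, N) sequence of best-matching model years, so the result
    also carries variables/locations for which no proxy data exist.
    """
    proxy = np.asarray(proxy_series, dtype=float)
    pool = np.asarray(model_pool, dtype=float)
    w = np.ones(proxy.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    out = np.empty((proxy.shape[0], pool.shape[1]))
    for t, target in enumerate(proxy):
        # cost: weighted squared misfit between each model year and the proxies
        cost = np.sum(w * (pool[:, site_idx] - target) ** 2, axis=1)
        out[t] = pool[np.argmin(cost)]          # closest analogue model year
    return out

# Tiny synthetic example: 3 proxy time steps, a pool of 100 "model years",
# 10 grid cells, 3 proxy sites. The proxies are near-copies of 3 pool years.
rng = np.random.default_rng(2)
pool = rng.standard_normal((100, 10))
sites = np.array([1, 4, 7])
proxies = pool[[5, 40, 80]][:, sites] + 0.01
recon = proxy_surrogate_reconstruction(proxies, pool, sites)
```

The reconstruction inherits the proxy chronology: the model's own time ordering is discarded, and only its spatially consistent states are reused.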

  10. Multimodal emotion perception after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Valérie Milesi

    2014-05-01

    Full Text Available In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion.

  11. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  12. Posterior internal auditory canal closure following the retrosigmoid approach to the cerebellopontine angle.

    Science.gov (United States)

    Leonetti, J P; Anderson, D E; Newell, D J; Smith, P G

    1993-01-01

    The retrosigmoid approach is utilized in a variety of cerebellopontine angle and internal auditory canal procedures. Drill curettage of the posterior internal auditory canal enhances lateral exposure; however, this step may also increase the patient's risk for postoperative cerebrospinal fluid (CSF) otorrhea. Obliteration of perilabyrinthine air cells is technically difficult and muscle graft displacement frequently occurs. A technique for posterior petrous dural flap stabilization of a temporalis muscle plug has proved successful in decreasing the risk of postoperative CSF fistula following retrosigmoid surgery. Temporal bone air-cell anatomy, as it relates to retrosigmoid, posterior internal auditory canal surgery, is reviewed. Our technique for internal auditory canal closure, with bone wax, bone pâté, muscle grafts, and petrous ridge dural flaps is outlined. PMID:8424473

  13. Different levels of Ih determine distinct temporal integration in bursting and regular-spiking neurons in rat subiculum.

    NARCIS (Netherlands)

    I. van Welie; M.W.H. Remme; J.A. van Hooft; W.J. Wadman

    2006-01-01

    Pyramidal neurons in the subiculum typically display either bursting or regular-spiking behaviour. Although this classification into two neuronal classes is well described, it is unknown how these two classes of neurons contribute to the integration of input to the subiculum. Here, we report that bu

  14. Hearing Mechanisms and Noise Metrics Related to Auditory Masking in Bottlenose Dolphins (Tursiops truncatus).

    Science.gov (United States)

    Branstetter, Brian K; Bakhtiari, Kimberly L; Trickey, Jennifer S; Finneran, James J

    2016-01-01

    Odontocete cetaceans are acoustic specialists that depend on sound to hunt, forage, navigate, detect predators, and communicate. Auditory masking from natural and anthropogenic sound sources may adversely affect these fitness-related capabilities. The ability to detect a tone in a broad range of natural, anthropogenic, and synthesized noise was tested with bottlenose dolphins using a psychophysical, band-widening procedure. Diverging masking patterns were found for noise bandwidths greater than the width of an auditory filter. Despite different noise types having equal pressure spectral-density levels (95 dB re 1 μPa²/Hz), masked detection threshold differences were as large as 22 dB. Consecutive experiments indicated that noise types with increased levels of amplitude modulation resulted in comodulation masking release due to within-channel and across-channel auditory mechanisms. The degree to which noise types were comodulated (comodulation index) was assessed by calculating the magnitude-squared coherence between the temporal envelope from an auditory filter centered on the signal and temporal envelopes from flanking filters. Statistical models indicate that masked thresholds in a variety of noise types, at a variety of levels, can be explained with metrics related to the comodulation index in addition to the pressure spectral-density level of noise. This study suggests that predicting auditory masking from ocean noise sources depends on both spectral and temporal properties of the noise. PMID:26610950
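
    The envelope-coherence measure described in this abstract can be approximated in a few lines of signal processing: extract the temporal envelope of a band centered on the signal and of a flanking band, then compute their magnitude-squared coherence. The sketch below is an illustrative reconstruction under simplifying assumptions, not the study's implementation: Butterworth bands stand in for true auditory filters, and the band edges, modulator rate, and 50-Hz averaging range are arbitrary choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, coherence

def band_envelope(x, fs, f_lo, f_hi):
    """Temporal envelope of x within one (simplified) auditory band."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

fs = 20000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
carrier = rng.standard_normal(fs)
# Comodulated noise: one slow modulator multiplies the whole broadband carrier
modulated = carrier * (1 + 0.9 * np.sin(2 * np.pi * 10 * t))

# Coherence between the envelope of the on-signal band and a flanking band
env_sig = band_envelope(modulated, fs, 900, 1100)
env_flank = band_envelope(modulated, fs, 1400, 1700)
f, cxy = coherence(env_sig, env_flank, fs=fs, nperseg=4096)
ci = cxy[f <= 50].mean()  # crude "comodulation index" over low envelope rates
```

    For a comodulated noise the band envelopes share the common modulator, so this index comes out higher than for an unmodulated Gaussian noise of the same spectrum level.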

  15. Auditory and non-auditory effects of noise on health.

    Science.gov (United States)

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-04-12

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105

  16. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  17. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas Giret

    2015-10-01

    Full Text Available Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differ between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of

  18. Neural Correlates of an Auditory Afterimage in Primary Auditory Cortex

    OpenAIRE

    Noreña, A. J.; Eggermont, J. J.

    2003-01-01

    The Zwicker tone (ZT) is defined as an auditory negative afterimage, perceived after the presentation of an appropriate inducer. Typically, a notched noise (NN) with a notch width of 1/2 octave induces a ZT with a pitch falling in the frequency range of the notch. The aim of the present study was to find potential neural correlates of the ZT in the primary auditory cortex of ketamine-anesthetized cats. Responses of multiunits were recorded simultaneously with two 8-electrode arrays during 1 s...

  19. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    Full Text Available A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this.
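
    The consonance manipulation at the heart of this design can be illustrated with a minimal two-tone stimulus. The mapping below is hypothetical (a 3:2 frequency ratio for the consonant pair and a tritone for the dissonant one) and is not the sonification scheme actually used in the study:

```python
import numpy as np

fs = 44100
dur = 0.5
t = np.arange(int(fs * dur)) / fs

def two_tone(f1, ratio):
    """Two-tone complex standing in for a coarse two-feature sonified image.
    The frequency ratio controls consonance: 3/2 is a consonant perfect
    fifth, 45/32 approximates a dissonant tritone. Illustrative only."""
    return np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f1 * ratio * t)

consonant = two_tone(440.0, 3 / 2)    # features fuse readily into one object
dissonant = two_tone(440.0, 45 / 32)  # features resist harmonic grouping
```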

  20. Monitoring Tree Population Dynamics in Arid Zone Through Multiple Temporal Scales: Integration of Spatial Analysis, Change Detection and Field Long Term Monitoring

    Science.gov (United States)

    Isaacson, S.; Rachmilevitch, S.; Ephrath, J. E.; Maman, S.; Blumberg, D. G.

    2016-06-01

    High mortality rates and lack of recruitment in the acacia populations throughout the Negev Desert and the Arava rift valley of Israel have been reported in previous studies. However, it is difficult to determine whether these reports constitute evidence of a significant declining trend in the tree populations, because of the slow dynamics of acacia tree populations and the lack of long-term continuous monitoring data. We suggest a new data analysis technique that expands the time scope of long-term field monitoring of trees in arid environments. This enables us to improve our understanding of the spatial and temporal changes of these populations. We implemented two different approaches in order to expand the time scope of the acacia population field survey: (1) individual-based tree change detection using Corona satellite images and (2) spatial analysis of the tree population, converting spatial data into temporal data. The next step was to integrate the results of the two analysis techniques (change detection and spatial analysis) with field monitoring. This technique can be applied to other tree populations in arid environments to help assess the vegetation conditions and dynamics of those ecosystems.

  1. Temporal Succession of Phytoplankton Assemblages in a Tidal Creek System of the Sundarbans Mangroves: An Integrated Approach

    Directory of Open Access Journals (Sweden)

    Dola Bhattacharjee

    2013-01-01

    Full Text Available Sundarbans, the world's largest mangrove ecosystem, is unique and biologically diverse. A study was undertaken to track temporal succession of phytoplankton assemblages at the generic level (≥10 µm) encompassing 31 weeks of sampling (June 2010–May 2011) in Sundarbans based on microscopy and hydrological measurements. As part of this study, amplification and sequencing of the type ID rbcL subunit of the RuBisCO enzyme were also applied to infer chromophytic algal groups (≤10 µm size) from one of the study points. We report the presence of 43 genera of Bacillariophyta, in addition to other phytoplankton groups, based on microscopy. Phytoplankton cell abundance, which was highest in winter and spring, ranged between 300 and 27,500 cells/L during this study. Cell biovolume varied between winter of 2010 (90–35281.04 µm³) and spring–summer of 2011 (52–33962.24 µm³). Winter supported large chain-forming diatoms, while spring supported small-sized diatoms, followed by other algal groups in summer. The clone library approach showed dominance of Bacillariophyta-like sequences, in addition to Cryptophyta-, Haptophyta-, Pelagophyta-, and Eustigmatophyta-like sequences, which were detected for the first time, highlighting their importance in the mangrove ecosystem. This study clearly shows that a combination of microscopy and molecular tools can improve understanding of phytoplankton assemblages in mangrove environments.

  2. An integrated approach for spatio-temporal variability analysis of wetlands: a case study of Abaya and Chamo lakes, Ethiopia.

    Science.gov (United States)

    Tibebu Kassawmar, N; Ram Mohan Rao, K; Lemlem Abraha, G

    2011-09-01

    Starting with the intensification of irrigation activities in the Abaya and Chamo lakes area at the beginning of the 1980s, decreasing water inflow to the lakes caused denudation of the wetlands. The ecological situation in the lake region changed significantly during the last four decades. The lakes and associated wetland changes have been studied using Landsat MSS (1973), Landsat TM (1986), and Landsat ETM (2000) satellite imagery. Along with satellite imagery, other hydro-meteorological data were collected and analyzed to assess the variability of the wetlands. From these data, lake morphometric properties were estimated for different time series and a water balance analysis was done for both lakes. Wetlands were mapped from the TCT image and these maps were subjected to change detection to see the temporal and spatial variability of the wetlands. Moreover, the lake morphometric area and volume variation have been studied. The results showed that between 1986 and 2000 a significant reduction was observed, though smaller than in the previous decades (6.4 km²). The identified reason behind this change is that free settlement and shoreline cultivation of the wetlands cause soil erosion and eventually add sediment to the wetlands. PMID:21108000

  3. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity…
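
    A few of the listed stages can be sketched in code. The fragment below is a heavily reduced illustration, not the published model: it chains only half-wave rectification (a crude stand-in for hair-cell transduction), the squaring expansion, and the 150-Hz lowpass modulation filter, omitting the outer/middle-ear transformations, basilar-membrane nonlinearity, adaptation stage, modulation filterbank, internal noise, and detector.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def envelope_stage(x, fs):
    """Reduced sketch of part of the model's front end: half-wave
    rectification, squaring expansion, and a 150-Hz first-order lowpass
    on the result. All other stages of the full model are omitted."""
    rectified = np.maximum(x, 0.0)
    expanded = rectified ** 2
    sos = butter(1, 150.0, btype="low", fs=fs, output="sos")
    return sosfilt(sos, expanded)

fs = 16000
t = np.arange(fs) / fs
# 1-kHz tone with 20-Hz amplitude modulation: the stage should pass the
# 20-Hz envelope while attenuating the 1-kHz fine structure
tone = (1 + np.sin(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 1000 * t)
env = envelope_stage(tone, fs)
```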

  4. Adaptation in the auditory system: an overview

    OpenAIRE

    David Pérez-González; Malmierca, Manuel S.

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the s...

  5. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu Yue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  6. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  8. Auditory-visual interaction: from fundamental research in cognitive psychology to (possible) applications

    Science.gov (United States)

    Kohlrausch, Armin; van de Par, Steven

    1999-05-01

    In our natural environment, we simultaneously receive information through various sensory modalities. The properties of these stimuli are coupled by physical laws, so that, e.g., auditory and visual stimuli caused by the same event have a fixed temporal relation when reaching the observer. In speech, for example, visible lip movements and audible utterances occur in close synchrony, which contributes to the improvement of speech intelligibility under adverse acoustic conditions. Research into multisensory perception is currently being performed in a great variety of experimental contexts. This paper attempts to give an overview of the typical research areas dealing with audio-visual interaction and integration, bridging the range from cognitive psychology to applied research for multimedia applications. Issues of interest are the sensitivity to asynchrony between audio and video signals, the interaction between audio-visual stimuli with discrepant spatial and temporal rate information, crossmodal effects in attention, audio-visual interactions in speech perception and the combined perceived quality of audio-visual stimuli.

  9. Masking and scrambling in the auditory thalamus of awake rats by Gaussian and modulated noises.

    Science.gov (United States)

    Martin, Eugene M; West, Morris F; Bedenbaugh, Purvis H

    2004-10-12

    This paper provides a look at how modulated broad-band noises modulate the thalamic response evoked by brief probe sounds in the awake animal. We demonstrate that noise not only attenuates the response to probe sounds (masking) but also changes the temporal response pattern (scrambling). Two brief probe sounds, a Gaussian noise burst and a brief sinusoidal tone, were presented in silence and in three ongoing noises. The three noises were targeted at activating the auditory system in qualitatively distinct ways. Dynamic ripple noise, containing many random tone-like elements, is targeted at those parts of the auditory system that respond well to tones. International Collegium of Rehabilitative Audiology noise, comprised of the sum of several simultaneous streams of Schroeder-phase speech, is targeted at those parts of the auditory system that respond well to modulated sounds but lack a well defined response to tones. Gaussian noise is targeted at those parts of the auditory system that respond to acoustic energy regardless of modulation. All noises both attenuated and decreased the precise temporal repeatability of the onset response to probe sounds. In addition, the modulated noises induced context-specific changes in the temporal pattern of the response to probe sounds. Scrambling of the temporal response pattern may be a direct neural correlate of the unfortunate experience of being able to hear, but not understand, speech sounds in noisy environments. PMID:15452349

  10. Vocal Stereotypy in Children with Autism: Structural Characteristics, Variability, and Effects of Auditory Stimulation

    Science.gov (United States)

    Lanovaz, Marc J.; Sladeczek, Ingrid E.

    2011-01-01

    Two experiments were conducted to examine (a) the relationship between the structural characteristics (i.e., bout duration, inter-response time [IRT], pitch, and energy) and overall duration of vocal stereotypy, and (b) the effects of auditory stimulation on the duration and temporal structure of the behavior. In the first experiment, we measured…

  11. Auditory Frequency Discrimination in Children with Specific Language Impairment: A Longitudinal Study

    Science.gov (United States)

    Hill, P. R.; Hogben, J. H.; Bishop, D. M. V.

    2005-01-01

    It has been proposed that specific language impairment (SLI) is caused by an impairment of auditory processing, but it is unclear whether this problem affects temporal processing, frequency discrimination (FD), or both. Furthermore, there are few longitudinal studies in this area, making it hard to establish whether any deficit represents a…

  12. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity

    Directory of Open Access Journals (Sweden)

    Lena-Vanessa Dollezal

    2014-06-01

    Full Text Available Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude-modulated (SAM) tones providing purely temporal, spectral or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream) or an ambiguous percept by adjusting the fmod difference between A and B tones (∆fmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on ∆fmod between A and B SAM tones. The effect of ∆fmod, however, differs between auditory cortex and frontal regions suggesting differences in representation related to the degree of perceptual ambiguity of
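
    An ABA- sequence of SAM tones of the kind described here is straightforward to synthesize. The sketch below is a generic reconstruction: the carrier frequency, modulation rates, and tone duration are arbitrary illustrative choices, not the study's actual parameter values.

```python
import numpy as np

def sam_tone(fc, fmod, dur, fs, depth=1.0):
    """Sinusoidally amplitude-modulated (SAM) tone."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + depth * np.sin(2 * np.pi * fmod * t)) * np.sin(2 * np.pi * fc * t)

def aba_sequence(fc, fmod_a, fmod_b, tone_dur, fs, n_triplets=5):
    """ABA- triplets: A and B are SAM tones that differ only in modulation
    frequency (purely temporal cue when carriers match); '-' is a silent
    gap lasting one tone duration."""
    a = sam_tone(fc, fmod_a, tone_dur, fs)
    b = sam_tone(fc, fmod_b, tone_dur, fs)
    gap = np.zeros_like(a)
    return np.concatenate([np.concatenate([a, b, a, gap]) for _ in range(n_triplets)])

# A large ∆fmod (here 100 vs 200 Hz) favors a segregated 2-stream percept
seq = aba_sequence(fc=4000, fmod_a=100, fmod_b=200, tone_dur=0.125, fs=44100)
```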

  13. Speed on the dance floor: Auditory and visual cues for musical tempo.

    Science.gov (United States)

    London, Justin; Burger, Birgitta; Thompson, Marc; Toiviainen, Petri

    2016-02-01

    Musical tempo is most strongly associated with the rate of the beat or "tactus," which may be defined as the most prominent rhythmic periodicity present in the music, typically in a range of 1.67-2 Hz. However, other factors such as rhythmic density, mean rhythmic inter-onset interval, metrical (accentual) structure, and rhythmic complexity can affect perceived tempo (Drake, Gros, & Penel, 1999; London, 2011). Visual information can also give rise to a perceived beat/tempo (Iversen et al., 2015), and auditory and visual temporal cues can interact and mutually influence each other (Soto-Faraco & Kingstone, 2004; Spence, 2015). A five-part experiment was performed to assess the integration of auditory and visual information in judgments of musical tempo. Participants rated the speed of six classic R&B songs on a seven-point scale while observing an animated figure dancing to them. Participants were presented with original and time-stretched (±5%) versions of each song in audio-only, audio+video (A+V), and video-only conditions. In some videos the animations were of spontaneous movements to the different time-stretched versions of each song, and in other videos the animations were of "vigorous" versus "relaxed" interpretations of the same auditory stimulus. Two main results were observed. First, in all conditions with audio, even though participants were able to correctly rank the original vs. time-stretched versions of each song, a song-specific tempo-anchoring effect was observed, such that sped-up versions of slower songs were judged to be faster than slowed-down versions of faster songs, even when their objective beat rates were the same. Second, when viewing a vigorous dancing figure in the A+V condition, participants gave faster tempo ratings than from the audio alone or when viewing the same audio with a relaxed dancing figure. The implications of this illusory tempo percept for cross-modal sensory integration and

  14. Gamma-ray bursts observed by the INTEGRAL-SPI anticoincidence shield: A study of individual pulses and temporal variability

    DEFF Research Database (Denmark)

    Ryde, F.; Borgonovo, L.; Larsson, S.;

    2003-01-01

    We study a set of 28 GRB light-curves detected between 15 December 2002 and 9 June 2003 by the anti-coincidence shield of the spectrometer (SPI) of INTEGRAL. During this period it has detected 50 bursts, that have been confirmed by other instruments, with a time resolution of 50 ms. First, we...... power-law with index of 1.60+/-0.05 and a break between 1-2 Hz. Fourth, we also discuss the background and noise levels. We found that the background noise has a Gaussian distribution and its power is independent of frequency, i.e., it is white noise. However, it does not follow a Poisson statistic...
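The white-noise characterization of the background can be illustrated on simulated data: for Gaussian noise, periodogram power is independent of frequency, so broad frequency bands carry equal average power. A toy check (not the instrument's analysis pipeline):

```python
import numpy as np

# Simulated Gaussian background, standing in for detector background counts.
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 2**16)

# Periodogram: squared magnitude of the FFT, normalized by sample count.
psd = np.abs(np.fft.rfft(noise)) ** 2 / noise.size

# For white noise, the average power in a low band and a high band agree
# to within sampling fluctuations (bin 0, the DC term, is excluded).
low = psd[1:len(psd) // 2].mean()
high = psd[len(psd) // 2:].mean()
ratio = low / high   # ~1 for white noise
```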

  15. Auditory habituation to simple tones: reduced evidence for habituation in children compared to adults

    OpenAIRE

    Jana Muenssinger; Gerhard Binder; Stefan Ehehalt

    2013-01-01

    Habituation—the response decrement to repetitively presented stimulation—is a basic cognitive capability and suited to investigate development and integrity of the human brain. To evaluate the developmental process of auditory habituation, the current study used magnetoencephalography (MEG) to investigate auditory habituation, dishabituation and stimulus specificity in children and adults and compared the results between age groups. Twenty-nine children (M age = 9.69 years, SD ± 0.47) and 14 ...

  16. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
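The central-prior ingredient of such a Bayesian account can be sketched with Gaussians: a prior centered straight ahead pulls the posterior location estimate toward the center, more strongly when the sensory likelihood is broad. The parameters below are hypothetical illustrations, not values fitted in the study:

```python
def posterior_mean(sensed_deg: float, sigma_like: float,
                   prior_mean: float = 0.0, sigma_prior: float = 20.0) -> float:
    """Minimum-variance combination of a Gaussian likelihood centered on the
    sensed location and a Gaussian prior centered straight ahead (0 deg)."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return w * sensed_deg + (1 - w) * prior_mean

# A peripheral stimulus sensed at 30 deg is estimated closer to center:
estimate = posterior_mean(30.0, sigma_like=10.0)   # 24.0 deg
```

With these numbers the prior weight is 0.2, so the 30-degree stimulus is pulled to 24 degrees, reproducing the kind of central bias the modeling attributes partly to a priori expectations.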

  17. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

    Full Text Available Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  18. Synchronization and phonological skills: precise auditory timing hypothesis (PATH)

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH), whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  19. Development of sensitivity to audiovisual temporal asynchrony during midchildhood.

    Science.gov (United States)

    Kaganovich, Natalya

    2016-02-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7- to 8-year-olds, 10- to 11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether nonverbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2-kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), and in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of reaction time (RT) at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10- to 11-year-olds outperforming 7- to 8-year-olds at the 300- to 500-ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function, such as autism, specific language impairment, and dyslexia, may be compared. PMID:26569563

  20. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    Science.gov (United States)

    Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  1. Variability and information content in auditory cortex spike trains during an interval-discrimination task.

    Science.gov (United States)

    Abolafia, Juan M; Martinez-Garcia, M; Deco, G; Sanchez-Vives, M V

    2013-11-01

    Processing of temporal information is key in auditory processing. In this study, we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike firing variability measured with the Fano factor was markedly reduced, not only during stimulation, but also in between stimuli in engaged trials. We next explored whether this decrease in variability was associated with increased information encoding. Our information theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task. PMID:23945780
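The Fano factor used here is simply the variance-to-mean ratio of spike counts over repeated trials; values near 1 indicate Poisson-like variability, and lower values indicate more reliable firing. A minimal sketch with simulated counts (not the recorded data):

```python
import numpy as np

def fano_factor(counts) -> float:
    """Variance-to-mean ratio of spike counts across repeated trials."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(1)
poisson_counts = rng.poisson(lam=8.0, size=1000)  # Poisson spiking: FF ~ 1
regular_counts = np.full(1000, 8)                 # perfectly regular: FF = 0
```

A drop in the Fano factor during engaged trials, as reported above, means spike counts became more repeatable from trial to trial than Poisson-like idle-state firing.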

  2. Animal models of spontaneous activity in the healthy and impaired auditory system

    Directory of Open Access Journals (Sweden)

    Jos J Eggermont

    2015-04-01

    Full Text Available Spontaneous neural activity in the auditory nerve fibers and in the auditory cortex of healthy animals is discussed with respect to the question: is spontaneous activity noise or an information carrier? The studies reviewed strongly suggest that spontaneous activity is a carrier of information. Subsequently, I review the numerous findings in the impaired auditory system, particularly with reference to noise trauma and tinnitus. Here the common assumption is that tinnitus reflects increased noise in the auditory system that, among other effects, alters temporal processing and interferes with the gap-startle reflex, which is frequently used as a behavioral assay for tinnitus. It is, however, more likely that the increased spontaneous activity in tinnitus, in firing rate as well as in neural synchrony, carries information that shapes the activity of downstream structures, including non-auditory ones, leading to the tinnitus percept. The main drivers of that process are bursting and synchronous firing, which facilitate transfer of activity across synapses and allow the formation of auditory objects, such as tinnitus

  4. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Full Text Available Pegah Afra, Michael Funke, Fumisuke Matsuo, Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as with intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal

  5. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure.

    Science.gov (United States)

    Stacey, Paula C; Kitterick, Pádraig T; Morris, Saffron D; Sumner, Christian J

    2016-06-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797

  6. Auditory, Somatosensory, and Multisensory Insular Cortex in the Rat

    OpenAIRE

    Rodgers, Krista M.; Benison, Alexander M.; Klein, Andrea; Barth, Daniel S.

    2008-01-01

    Compared with other areas of the forebrain, the function of insular cortex is poorly understood. This study examined the unisensory and multisensory function of the rat insula using high-resolution, whole-hemisphere, epipial evoked potential mapping. We found the posterior insula to contain distinct auditory and somatotopically organized somatosensory fields with an interposed and overlapping region capable of integrating these sensory modalities. Unisensory and multisensory responses were un...

  7. Task-irrelevant auditory feedback facilitates motor performance in musicians

    OpenAIRE

    Virginia Conde; Eckart Altenmüller; Arno Villringer

    2012-01-01

    An efficient and fast auditory–motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in mus...

  8. Efferent auditory system: its effect on auditory processing

    Directory of Open Access Journals (Sweden)

    Fernanda Acaui Ribeiro Burguetti

    2008-10-01

    Full Text Available Auditory processing depends on the integrity of the afferent and efferent auditory pathways. The efferent auditory system can be assessed in humans by two non-invasive, objective methods: the acoustic reflex and suppression of otoacoustic emissions (OAE). AIM: To assess efferent auditory system activity by means of OAE suppression and acoustic reflex sensitization in subjects with auditory processing disorders. METHOD: Prospective study: 50 children with auditory processing disorders (study group) and 38 children without such disorders (control group) were evaluated with OAE in the absence and presence of contralateral noise, and with acoustic reflex thresholds in the absence and presence of a contralateral facilitating stimulus. RESULTS: Mean OAE suppression was up to 1.50 dB for the control group and up to 1.26 dB for the study group. Mean reflex sensitization was up to 14.60 dB for the study group and up to 15.21 dB for the control group. There was no statistically significant difference between the control and study groups in either procedure. CONCLUSION: The study group showed reduced OAE suppression values and increased acoustic reflex sensitization values relative to the control group.

  9. Coding of auditory space

    OpenAIRE

    Konishi­, Masakazu

    2003-01-01

    Behavioral, anatomical, and physiological approaches can be integrated in the study of sound localization in barn owls. Space representation in owls provides a useful example for discussion of place and ensemble coding. Selectivity for space is broad and ambiguous in low-order neurons. Parallel pathways for binaural cues and for different frequency bands converge on high-order space-specific neurons, which encode space more precisely. An ensemble of broadly tuned place-coding neurons may conv...

  10. Temporal scaling of neural responses to compressed and dilated natural speech.

    Science.gov (United States)

    Lerner, Y; Honey, C J; Katkov, M; Hasson, U

    2014-06-15

    Different brain areas integrate information over different timescales, and this capacity to accumulate information increases from early sensory areas to higher order perceptual and cognitive areas. It is currently unknown whether the timescale capacity of each brain area is fixed or whether it adaptively rescales depending on the rate at which information arrives from the world. Here, using functional MRI, we measured brain responses to an auditory narrative presented at different rates. We asked whether neural responses to slowed (speeded) versions of the narrative could be compressed (stretched) to match neural responses to the original narrative. Temporal rescaling was observed in early auditory regions (which accumulate information over short timescales) as well as linguistic and extra-linguistic brain areas (which can accumulate information over long timescales). The temporal rescaling phenomenon started to break down for stimuli presented at double speed, and intelligibility was also impaired for these stimuli. These data suggest that 1) the rate of neural information processing can be rescaled according to the rate of incoming information, both in early sensory regions as well as in higher order cortexes, and 2) the rescaling of neural dynamics is confined to a range of rates that match the range of behavioral performance. PMID:24647432
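The core analysis asks whether a response to sped-up input, once linearly stretched back to the original duration, matches the response to the original narrative. A toy illustration of that rescaling logic on a synthetic time series (not the authors' fMRI pipeline):

```python
import numpy as np

def rescale(response, factor: float):
    """Linearly resample a time series by `factor` (2.0 = twice as long)."""
    n_out = int(len(response) * factor)
    x_old = np.linspace(0.0, 1.0, len(response))
    x_new = np.linspace(0.0, 1.0, n_out)
    return np.interp(x_new, x_old, response)

t = np.linspace(0, 4 * np.pi, 400)
original = np.sin(t)                       # idealized response at normal rate
compressed = original[::2]                 # idealized response at double speed
restored = rescale(compressed, 2.0)        # stretched back to original length
r = np.corrcoef(restored, original)[0, 1]  # near 1 when rescaling holds
```

A correlation near 1 after stretching corresponds to the temporal rescaling the study reports in auditory and linguistic regions; the breakdown at double speed would appear as a much lower correlation.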

  11. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    Directory of Open Access Journals (Sweden)

    Ranganadh Narayanam

    2015-10-01

    Full Text Available The objective of this project is to present a versatile speech enhancement method based on a model of the human auditory system. We describe a speech enhancement scheme that meets the demand for noise-reduction algorithms capable of operating at very low signal-to-noise ratios, and we discuss how the proposed system reduces noise with little speech degradation in diverse noise environments. To reduce residual noise and improve the intelligibility of speech, a psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method. This is a generalized time-frequency subtraction algorithm that exploits the wavelet multirate signal representation to preserve critical transient information. Simultaneous masking and temporal masking in the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the method. To further increase the intelligibility of speech, an unvoiced speech enhancement algorithm is also integrated into the system.
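As a point of reference for the time-frequency subtraction idea, here is a bare-bones magnitude spectral subtraction over a single FFT frame. The method described above additionally uses perceptual wavelet packet analysis and masking thresholds to set the subtraction parameters adaptively; this sketch fixes them as constants:

```python
import numpy as np

def spectral_subtract(noisy, noise_estimate, alpha=1.0, floor=0.02):
    """Subtract an estimated noise magnitude spectrum from the noisy
    spectrum, flooring the result to limit musical-noise artifacts."""
    spec = np.fft.rfft(noisy)
    mag, phase = np.abs(spec), np.angle(spec)
    noise_mag = np.abs(np.fft.rfft(noise_estimate))
    # Over-subtraction factor alpha and spectral floor stand in for the
    # adaptively adjusted, masking-threshold-driven parameters of the paper.
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy))
```

Because the output magnitudes never exceed the input magnitudes, the enhanced frame always has lower energy than the noisy frame; the perceptual machinery in the paper decides how aggressively to subtract in each time-frequency region.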

  12. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration. PMID:25848682

  13. Both the middle temporal gyrus and the ventral anterior temporal area are crucial for multimodal semantic processing: distortion-corrected fMRI evidence for a double gradient of information convergence in the temporal lobes.

    Science.gov (United States)

    Visser, Maya; Jefferies, Elizabeth; Embleton, Karl V; Lambon Ralph, Matthew A

    2012-08-01

    Most contemporary theories of semantic memory assume that concepts are formed from the distillation of information arising in distinct sensory and verbal modalities. The neural basis of this distillation or convergence of information was the focus of this study. Specifically, we explored two commonly posed hypotheses: (a) that the human middle temporal gyrus (MTG) provides a crucial semantic interface given the fact that it interposes auditory and visual processing streams and (b) that the anterior temporal region, especially its ventral surface (vATL), provides a critical region for the multimodal integration of information. By utilizing distortion-corrected fMRI and an established semantic association assessment (commonly used in neuropsychological investigations), we compared the activation patterns observed for both the verbal and nonverbal versions of the same task. The results are consistent with the two hypotheses simultaneously: Both MTG and vATL are activated in common for word and picture semantic processing. Additional planned ROI analyses show that this result follows from two principal axes of convergence in the temporal lobe: both lateral (toward MTG) and longitudinal (toward the anterior temporal lobe). PMID:22621260

  14. Temporal variations and spectral properties of the Be/X-ray pulsar GRO J1008-57 studied by INTEGRAL

    International Nuclear Information System (INIS)

    The spin period variations and hard X-ray spectral properties of the Be/X-ray pulsar GRO J1008-57 are studied with INTEGRAL observations during two outbursts in 2004 June and 2009 March. Pulsation periods of ∼93.66 s in 2004 and ∼93.73 s in 2009 are determined. Pulse profiles of GRO J1008-57 during outbursts are strongly energy dependent, with a double-peaked profile from 3-7 keV and a single-peaked profile in hard X-rays above 7 keV. Combined with previous measurements, we find that GRO J1008-57 underwent a spin-down trend from 1993-2009 with a rate of ∼4.1 × 10^-5 s d^-1, and could have changed to a spin-up trend after 2009. We find a relatively soft spectrum in the early phase of the 2009 outburst with cutoff energy ∼13 keV. Above a hard X-ray flux of ∼10^-9 erg cm^-2 s^-1, the spectra of GRO J1008-57 during outbursts require enhanced hydrogen absorption with column density ∼6 × 10^22 cm^-2. The observed dip-like pulse profile of GRO J1008-57 in soft X-ray bands could be caused by this intrinsic absorption. Around the outburst peaks, a possible cyclotron resonance scattering feature at ∼74 keV is detected in the spectra of GRO J1008-57, consistent with the feature reported in MAXI/GSC observations, making the source the neutron star with the highest known magnetic field (∼6.6 × 10^12 G) among accreting X-ray pulsars. This marginal feature is supported by the present detections in GRO J1008-57 following the correlation between the fundamental line energies and cutoff energies in accreting X-ray pulsars. Finally, we discovered two modulation periods at ∼124.38 d and ∼248.78 d using RXTE/ASM light curves of GRO J1008-57. Two flare peaks appearing in the folded light curve had different spectral properties. The normal outburst, lasting 0.1 of an orbital phase, had a hard spectrum and could not be significantly detected below 3 keV. The second flare, lasting ten days, showed a very soft
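The quoted long-term spin-down rate is consistent with a back-of-envelope estimate from the two INTEGRAL period measurements alone; the ~1735-day separation between the 2004 June and 2009 March outbursts used below is an approximation:

```python
# Approximate check of the spin-down rate from the two reported periods.
p_2004, p_2009 = 93.66, 93.73                  # pulse periods (s)
days_between = 1735                            # ~2004 June to ~2009 March (assumed)
spin_down = (p_2009 - p_2004) / days_between   # seconds per day
# ~4.0e-5 s/d, close to the ~4.1e-5 s/d long-term trend quoted above
```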

  15. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  16. Musical expertise induces audiovisual integration of abstract congruency rules.

    Science.gov (United States)

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Pantev, Christo

    2012-12-12

    Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is not generated due to incongruency of the unisensory physical characteristics of the stimulation but from the violation of an abstract congruency rule. The chosen rule ("the higher the pitch of the tone, the higher the position of the circle") was comparable to musical reading. In parallel, plasticity effects due to long-term musical training on this response were investigated by comparing musicians to non-musicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses. PMID:23238733

  17. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    Science.gov (United States)

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys. PMID:17321703

  18. Assessment of auditory cortical function in cochlear implant patients using 15O PET

    International Nuclear Information System (INIS)

    Full text: Cochlear implantation has been an extraordinarily successful method of restoring hearing and the potential for full language development in pre-lingually and post-lingually deaf individuals (Gibson 1996). Post-lingually deaf patients, who develop their hearing loss later in life, respond best to cochlear implantation within the first few years of their deafness, but are less responsive to implantation after several years of deafness (Gibson 1996). In pre-lingually deaf children, cochlear implantation is most effective in allowing the full development of language skills when performed within a critical period, in the first 8 years of life. These clinical observations suggest considerable neural plasticity of the human auditory cortex in acquiring and retaining language skills (Gibson 1996, Buchwald 1990). Currently, electrocochleography is used to determine the integrity of the auditory pathways to the auditory cortex. However, the functional integrity of the auditory cortex cannot be determined by this method. We have defined the extent of activation of the auditory cortex and auditory association cortex in 6 normal controls and 6 cochlear implant patients using 15O PET functional brain imaging methods. Preliminary results have indicated the potential clinical utility of 15O PET cortical mapping in the pre-surgical assessment and post-surgical follow up of cochlear implant patients. Copyright (1998) Australian Neuroscience Society

  19. Mapping tropical forests and deciduous rubber plantations in Hainan Island, China by integrating PALSAR 25-m and multi-temporal Landsat images

    Science.gov (United States)

    Chen, Bangqian; Li, Xiangping; Xiao, Xiangming; Zhao, Bin; Dong, Jinwei; Kou, Weili; Qin, Yuanwei; Yang, Chuan; Wu, Zhixiang; Sun, Rui; Lan, Guoyu; Xie, Guishui

    2016-08-01

    Updated and accurate maps of tropical forests and industrial plantations, like rubber plantations, are essential for understanding carbon cycle and optimal forest management practices, but existing optical-imagery-based efforts are greatly limited by frequent cloud cover. Here we explored the potential utility of integrating 25-m cloud-free Phased Array type L-band Synthetic Aperture Radar (PALSAR) mosaic product and multi-temporal Landsat images to map forests and rubber plantations in Hainan Island, China. Based on structure information detected by PALSAR and yearly maximum Normalized Difference Vegetation Index (NDVI), we first identified and mapped forests with a producer accuracy (PA) of 96% and user accuracy (UA) of 98%. The resultant forest map showed reasonable spatial and areal agreements with the optical-based forest maps of Fine Resolution Observation and Monitoring Global Land Clover (FROM-GLC) and GlobalLand30. We then extracted rubber plantations from the forest map according to their deciduous features (using minimum Land Surface Water Index, LSWI) and rapid changes in canopies during Rubber Defoliation and Foliation (RDF) period (using standard deviation of LSWI) and dense canopy in growing season (using maximum NDVI). The rubber plantation map yielded a high accuracy when validated by ground truth-based data (PA/UA > 86%) and evaluated with three farm-scale rubber plantation maps (PA/UA >88%). It is estimated that in 2010, Hainan Island had 2.11 × 106 ha of forest and 5.15 × 105 ha of rubber plantations. This study has demonstrated the potential of integrating 25-m PALSAR-based structure information, and Landsat-based spectral and phenology information for mapping tropical forests and rubber plantations.
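    The spectral indices behind these decision rules are simple band ratios. A minimal sketch of the per-pixel arithmetic (the reflectance values and the band naming are illustrative assumptions, not the paper's data):

```python
# The two vegetation indices used in the mapping workflow, computed from
# surface reflectances in [0, 1]. Input values below are illustrative only.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (canopy greenness)."""
    return (nir - red) / (nir + red)

def lswi(nir, swir):
    """Land Surface Water Index (sensitive to leaf/canopy water content)."""
    return (nir - swir) / (nir + swir)

# Dense evergreen canopy in the growing season: high NIR, low red.
print(round(ndvi(0.45, 0.05), 2))  # 0.8
# Defoliated rubber stand in the dry season: LSWI drops toward zero.
print(round(lswi(0.30, 0.25), 2))  # 0.09
```

    The classifiers described above operate on yearly statistics of these indices (maximum NDVI, minimum LSWI, and the standard deviation of LSWI over the defoliation-foliation period); the snippet only shows the underlying index arithmetic.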

  20. Psychophysiological responses to auditory change.

    Science.gov (United States)

    Chuen, Lorraine; Sears, David; McAdams, Stephen

    2016-06-01

    A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions. PMID:26927928

  1. Auditory distraction and serial memory

    OpenAIRE

    Jones, D M; Hughes, Rob; Macken, W.J.

    2010-01-01

    One mental activity that is very vulnerable to auditory distraction is serial recall. This review of the contemporary findings relating to serial recall charts the key determinants of distraction. It is evident that there is one form of distraction that is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity-of-process, specifically, the seria...

  2. Auditory learning: a developmental method.

    Science.gov (United States)

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts the auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. The actions that the robot is expected to perform are likewise unavailable before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment including the trainers. PMID:15940990

  3. Auditory sequence analysis and phonological skill.

    Science.gov (United States)

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  4. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition evaluated by prepulses presented at 10-800 ms before the test stimulus showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs by fast-spiking, parvalbumin-positive cells and somatostatin-positive Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  5. Temporal networks

    CERN Document Server

    Saramäki, Jari

    2013-01-01

    The concept of temporal networks is an extension of complex networks as a modeling framework to include information on when interactions between nodes happen. Many studies of the last decade examine how the static network structure affects dynamic systems on the network. In this traditional approach the temporal aspects are pre-encoded in the dynamic system model. Temporal-network methods, on the other hand, lift the temporal information from the level of system dynamics to the mathematical representation of the contact network itself. This framework becomes particularly useful for cases where there is a lot of structure and heterogeneity both in the timings of interaction events and the network topology. The advantage compared to common static network approaches is the ability to design more accurate models in order to explain and predict large-scale dynamic phenomena (such as epidemic outbreaks and other spreading phenomena). On the other hand, temporal network methods are mathematically and concept...
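    The central point, that the ordering of contacts (not just the topology) governs what can spread, can be illustrated with a toy time-respecting reachability check. This is a hypothetical sketch; the event list and function are not from the book:

```python
# Minimal illustration: in a temporal network, reachability depends on
# event ordering. Toy contact data, not drawn from the source text.

def reachable(events, src, dst):
    """events: list of (u, v, t) contacts. Returns True if dst is
    reachable from src via a time-respecting path (non-decreasing t)."""
    arrival = {src: float("-inf")}  # earliest time information reaches a node
    for u, v, t in sorted(events, key=lambda e: e[2]):
        if u in arrival and arrival[u] <= t:
            arrival[v] = min(arrival.get(v, float("inf")), t)
    return dst in arrival

# Statically A-B-C is a path, but the B-C contact happens *before* the
# A-B contact, so information from A can never reach C.
events = [("B", "C", 1), ("A", "B", 2)]
print(reachable(events, "A", "C"))  # False
print(reachable(events, "B", "C"))  # True
```

    A static-graph analysis of the same edge set would report A-to-C as reachable, which is exactly the kind of error temporal-network methods are designed to avoid.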

  6. Development of Receiver Stimulator for Auditory Prosthesis

    OpenAIRE

    K. Raja Kumar; P. Seetha Ramaiah

    2010-01-01

    The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve via an array of electrodes with an electric current, allowing them to understand speech. The AP system consists of two hardware functional units: the Body Worn Speech Processor (BWSP) and the Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of Speech Data Decoder, DAC, ADC, constant...

  7. Auditory stimulation and cardiac autonomic regulation

    OpenAIRE

    Vitor E Valenti; Guida, Heraldo L.; Frizzo, Ana C F; Cardoso, Ana C. V.; Vanderlei, Luiz Carlos M; Luiz Carlos de Abreu

    2012-01-01

    Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation bet...

  8. Behavioural and neural correlates of auditory attention

    OpenAIRE

    Roberts, Katherine Leonie

    2005-01-01

    The auditory attention skills of alerting, orienting, and executive control were assessed using behavioural and neuroimaging techniques. Initially, an auditory analogue of the visual attention network test (ANT) (Fan, McCandliss, Sommer, Raz, & Posner, 2002) was created and tested alongside the visual ANT in a group of 40 healthy subjects. The results from this study showed similarities between auditory and visual spatial orienting. An fMRI study was conducted to investigate whether the simil...

  9. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  10. Corticofugal modulation of peripheral auditory responses

    Directory of Open Access Journals (Sweden)

    Paul Hinckley Delano

    2015-09-01

    Full Text Available The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body, inferior colliculus, cochlear nucleus and superior olivary complex, reaching the cochlea through olivocochlear fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including the (i) colliculo-thalamic-cortico-collicular, (ii) cortico-(collicular)-olivocochlear and (iii) cortico-(collicular)-cochlear nucleus pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the olivocochlear reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on cochlear nucleus, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed.

  11. Corticofugal modulation of peripheral auditory responses.

    Science.gov (United States)

    Terreros, Gonzalo; Delano, Paul H

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647

  12. Proximal vocal threat recruits the right voice-sensitive auditory cortex.

    Science.gov (United States)

    Ceravolo, Leonardo; Frühholz, Sascha; Grandjean, Didier

    2016-05-01

    The accurate estimation of the proximity of threat is important for biological survival and to assess relevant events of everyday life. We addressed the question of whether proximal as compared with distal vocal threat would lead to a perceptual advantage for the perceiver. Accordingly, we sought to highlight the neural mechanisms underlying the perception of proximal vs distal threatening vocal signals by the use of functional magnetic resonance imaging. Although we found that the inferior parietal and superior temporal cortex of human listeners generally decoded the spatial proximity of auditory vocalizations, activity in the right voice-sensitive auditory cortex was specifically enhanced for proximal aggressive relative to distal aggressive voices as compared with neutral voices. Our results shed new light on the processing of imminent danger signaled by proximal vocal threat and show the crucial involvement of the right mid voice-sensitive auditory cortex in such processing. PMID:26746180

  13. Loss of auditory sensitivity from inner hair cell synaptopathy can be centrally compensated in the young but not old brain.

    Science.gov (United States)

    Möhrle, Dorit; Ni, Kun; Varakina, Ksenya; Bing, Dan; Lee, Sze Chim; Zimmermann, Ulrike; Knipper, Marlies; Rüttiger, Lukas

    2016-08-01

    A dramatic shift in societal demographics will lead to rapid growth in the number of older people with hearing deficits. Poorer performance in suprathreshold speech understanding and temporal processing with age has been previously linked with progressing inner hair cell (IHC) synaptopathy that precedes age-dependent elevation of auditory thresholds. We compared central sound responsiveness after acoustic trauma in young, middle-aged, and older rats. We demonstrate that IHC synaptopathy progresses from middle age onward and hearing threshold becomes elevated from old age onward. Interestingly, middle-aged animals could centrally compensate for the loss of auditory fiber activity through an increase in late auditory brainstem responses (late auditory brainstem response wave) linked to shortening of central response latencies. In contrast, old animals failed to restore central responsiveness, which correlated with reduced temporal resolution in responding to amplitude changes. These findings may suggest that cochlear IHC synaptopathy with age does not necessarily induce temporal auditory coding deficits, as long as the capacity to generate neuronal gain maintains normal sound-induced central amplitudes. PMID:27318145

  14. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different conceptions of integration in Denmark, and what may be understood by successful integration.

  15. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    Directory of Open Access Journals (Sweden)

    Karsten Specht

    2013-10-01

    Full Text Available Recent models on speech perception propose a dual stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging (fMRI) studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesised, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a “lateralisation” gradient. This increasing leftward lateralisation was particularly evident for the left superior temporal sulcus (STS) and more anterior parts of the temporal lobe.

  16. Auditory hallucinations suppressed by etizolam in a patient with schizophrenia.

    Science.gov (United States)

    Benazzi, F; Mazzoli, M; Rossi, E

    1993-10-01

    A patient presented with a 15 year history of schizophrenia with auditory hallucinations. Though unresponsive to prolonged trials of neuroleptics, the auditory hallucinations disappeared with etizolam. PMID:7902201

  17. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling.
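    Entrainment of this kind is commonly quantified with inter-trial phase coherence (ITC): extract each trial's phase at the stimulation rate and measure how tightly the phases cluster across trials. A self-contained sketch on synthetic data (the sampling rate, trial count, and noise level are arbitrary assumptions, not the study's parameters):

```python
# Toy inter-trial phase coherence (ITC) at a 2 Hz (delta-rate) stimulus,
# using a single-bin discrete Fourier transform. Synthetic trials only.
import cmath, math, random

def itc_at(trials, freq, fs):
    """Mean resultant length of single-trial phases at `freq` (Hz)."""
    phasors = []
    for x in trials:
        # single-frequency DFT bin of one trial
        bin_ = sum(s * cmath.exp(-2j * math.pi * freq * n / fs)
                   for n, s in enumerate(x))
        phasors.append(bin_ / abs(bin_))  # keep phase, discard amplitude
    return abs(sum(phasors)) / len(phasors)

random.seed(0)
fs, f = 100, 2.0  # 100 Hz sampling, 2 Hz stimulation rate
# 20 phase-locked trials of 2 s: same-phase 2 Hz sine plus noise
locked = [[math.sin(2 * math.pi * f * n / fs) + 0.3 * random.gauss(0, 1)
           for n in range(200)] for _ in range(20)]
print(itc_at(locked, f, fs) > 0.9)  # phases align across trials -> ITC near 1
```

    ITC ranges from 0 (random phases across trials) to 1 (perfect phase locking); the cross-modal phase-resetting result above corresponds to a shift in the preferred phase, which the same per-trial phasors also expose.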

  18. Quantitative cerebral perfusion assessment using microscope-integrated analysis of intraoperative indocyanine green fluorescence angiography versus positron emission tomography in superficial temporal artery to middle cerebral artery anastomosis

    Directory of Open Access Journals (Sweden)

    Shinya Kobayashi

    2014-01-01

    Full Text Available Background: Intraoperative qualitative indocyanine green (ICG) angiography has been used in cerebrovascular surgery. Hyperperfusion may lead to neurological complications after superficial temporal artery to middle cerebral artery (STA-MCA) anastomosis. The purpose of this study is to quantitatively evaluate intraoperative cerebral perfusion using microscope-integrated dynamic ICG fluorescence analysis, and to assess whether this value predicts hyperperfusion syndrome (HPS) after STA-MCA anastomosis. Methods: Ten patients undergoing STA-MCA anastomosis due to unilateral major cerebral artery occlusive disease were included. Ten patients with normal cerebral perfusion served as controls. The ICG transit curve from six regions of interest (ROIs) on the cortex, corresponding to ROIs on the positron emission tomography (PET) study, was recorded. Maximum intensity (IMAX), cerebral blood flow index (CBFi), rise time (RT), and time to peak (TTP) were evaluated. Results: RT/TTP, but not IMAX or CBFi, could differentiate between control and study subjects. RT/TTP correlated (|r| = 0.534-0.807; P < 0.01) with the ipsilateral-to-contralateral mean transit time (MTT) ratio from the PET study. Bland-Altman analysis showed a wide limit of agreement between RT and MTT and between TTP and MTT. The ratio of RT before and after bypass procedures was significantly lower in patients with postoperative HPS than in patients without postoperative HPS (0.60 ± 0.032 and 0.80 ± 0.056, respectively; P = 0.017). The ratio of TTP was also significantly lower in patients with postoperative HPS than in patients without postoperative HPS (0.64 ± 0.081 and 0.85 ± 0.095, respectively; P = 0.017). Conclusions: Time-dependent intraoperative parameters from the ICG transit curve provide quantitative information regarding cerebral circulation time with quality and utility comparable to information obtained by PET. These parameters may help predict the occurrence of postoperative HPS.
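    The time-dependent parameters named here (TTP and RT) are simple functions of the sampled transit curve. A sketch on a synthetic gamma-variate curve follows; the curve shape, frame rate, and the 10%-to-90% rise-time convention are illustrative assumptions, not the study's definitions:

```python
# Time-to-peak and rise time read off a sampled intensity-time curve,
# here a synthetic gamma-variate t^2 * exp(-t). Illustrative only.
import math

def ttp(curve, dt):
    """Time to peak: time from the first frame to maximum intensity."""
    return curve.index(max(curve)) * dt

def rise_time(curve, dt):
    """Rise time, here taken as the 10%-to-90%-of-peak upslope interval."""
    peak = max(curve)
    i10 = next(i for i, v in enumerate(curve) if v >= 0.1 * peak)
    i90 = next(i for i, v in enumerate(curve) if v >= 0.9 * peak)
    return (i90 - i10) * dt

dt = 0.1  # seconds per frame (assumed)
curve = [t * t * math.exp(-t) for t in (i * dt for i in range(200))]

print(round(ttp(curve, dt), 1))        # 2.0  (t^2 e^-t peaks at t = 2)
print(round(rise_time(curve, dt), 1))  # 1.2
```

    In the study these quantities are extracted per ROI from the microscope's ICG intensity recordings and compared as before/after-bypass ratios, which is what makes them usable as an intraoperative HPS predictor.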

  19. Project Temporalities

    DEFF Research Database (Denmark)

    Tryggestad, Kjell; Justesen, Lise; Mouritsen, Jan

    2013-01-01

    Purpose – The purpose of this paper is to explore how animals can become stakeholders in interaction with project management technologies and what happens with project temporalities when new and surprising stakeholders become part of a project and a recognized matter of concern to be taken into account. Design/methodology/approach – The paper is based on a qualitative case study of a project in the building industry. The authors use actor-network theory (ANT) to analyze the emergence of animal stakeholders, stakes and temporalities. Findings – The study shows how project temporalities can multiply in interaction with project management technologies and how conventional linear conceptions of project time may be contested with the emergence of new non-human stakeholders and temporalities. Research limitations/implications – The study draws on ANT to show how animals can become stakeholders...

  20. Rapid context-based identification of target sounds in an auditory scene

    Science.gov (United States)

    Gamble, Marissa L.; Woldorff, Marty G.

    2015-01-01

    To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. While it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff (2014) recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented, auditory scene. They reported an early, differential, bilateral activation (beginning ~60 ms) between feature-deviating Target stimuli and physically equivalent feature-deviating Nontargets, reflecting a rapid Target-detection process. This was followed shortly later (~130 ms) by the lateralized N2ac ERP activation, reflecting the focusing of auditory spatial attention toward the Target sound and paralleling attentional-shifting processes widely studied in vision. Here we directly examined the early, bilateral, Target-selective effect to better understand its nature and functional role. Participants listened to midline-presented sounds that included Target and Nontarget stimuli that were randomly either embedded in a brief rapid stream or presented alone. The results indicate that this early bilateral effect results from a template for the Target that utilizes its feature deviancy within a stream to enable rapid identification. Moreover, individual-differences analysis showed that the size of this effect was larger for subjects with faster response times. The findings support the hypothesis that our auditory attentional systems can implement and utilize a context-based relational template for a Target sound, making use of additional auditory information in the environment when needing to rapidly detect a relevant sound. PMID:25848684

  1. Effects of auditory training in individuals with high-frequency hearing loss

    Directory of Open Access Journals (Sweden)

    Renata Beatriz Fernandes Santos

    2014-01-01

    Full Text Available OBJECTIVE: To determine the effects of a formal auditory training program on the behavioral, electrophysiological and subjective aspects of auditory function in individuals with bilateral high-frequency hearing loss. METHOD: A prospective study of seven individuals aged 46 to 57 years with symmetric, moderate high-frequency hearing loss ranging from 3 to 8 kHz was conducted. Evaluations of auditory processing (sound location, verbal and non-verbal sequential memory tests, the speech-in-noise test, the staggered spondaic word test, synthetic sentence identification with ipsilateral and contralateral competing messages, random gap detection and the standard duration test), auditory brainstem response and long-latency potentials, and the administration of the Abbreviated Profile of Hearing Aid Benefit questionnaire were performed in a sound booth before and immediately after formal auditory training. RESULTS: All of the participants demonstrated abnormal pre-training long-latency characteristics (abnormal latency or absence of the P3 component), and these abnormal characteristics were maintained in six of the seven individuals at the post-training evaluation. No significant differences were found between ears in the quantitative analysis of auditory brainstem responses or long-latency potentials. However, the subjects demonstrated improvements on all behavioral tests. For the questionnaire, the difference on the background noise subscale achieved statistical significance. CONCLUSION: Auditory training in adults with high-frequency hearing loss led to improvements in figure-ground hearing skills for verbal sounds, temporal ordering and resolution, and communication in noisy environments. Electrophysiological changes were also observed: some long-latency components that were absent pre-training were present at the post-training re-evaluation.

  2. Integration of spatial and temporal data for the definition of different landslide hazard scenarios in the area north of Lisbon (Portugal

    Directory of Open Access Journals (Sweden)

    J. L. Zêzere

    2004-01-01

    Full Text Available A general methodology for the probabilistic evaluation of landslide hazard is applied, taking into account both the landslide susceptibility and the instability triggering factors, mainly rainfall. The method is applied in the Fanhões-Trancão test site (north of Lisbon, Portugal), where 100 shallow translational slides were mapped and integrated into a GIS database. For the landslide susceptibility assessment it is assumed that future landslides can be predicted by statistical relationships between past landslides and the spatial data set of the predisposing factors (slope angle, slope aspect, transversal slope profile, lithology, superficial deposits, geomorphology, and land use). Susceptibility is evaluated using algorithms based on statistical/probabilistic analysis (Bayesian model) over unique-condition terrain units in a raster basis. The landslide susceptibility map is prepared by sorting all pixels according to the pixel susceptibility value in descending order. In order to validate the results of the susceptibility analysis, the landslide data set is divided in two parts, using a temporal criterion. The first subset is used for obtaining a prediction image and the second subset is compared with the prediction results for validation. The obtained prediction-rate curve is used for the quantitative interpretation of the initial susceptibility map. Landslides in the study area are triggered by rainfall. The integration of triggering information in hazard assessment includes (i) the definition of rainfall thresholds (quantity-duration) responsible for past landslide events; (ii) the calculation of the relevant return periods; (iii) the assumption that the same rainfall patterns (quantity/duration) which produced slope instability in the past will produce the same effects in the future (i.e. same types of landslides and same total affected area). The landslide hazard is presented as the probability of each pixel being affected by a slope movement.
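    The prediction-rate validation described in this record (sort pixels by susceptibility, then check how quickly the withheld landslides are captured as area is accumulated) can be sketched as follows. This is a toy illustration of the bookkeeping only, not the study's Bayesian susceptibility model; the eight-pixel raster and all values are hypothetical.

```python
import numpy as np

def prediction_rate_curve(susceptibility, validation_slides):
    """Sort pixels by descending susceptibility and accumulate the fraction
    of validation-subset landslides captured as more area is included."""
    order = np.argsort(susceptibility)[::-1]
    slides = np.asarray(validation_slides, dtype=float)[order]
    area_fraction = np.arange(1, slides.size + 1) / slides.size
    slide_fraction = np.cumsum(slides) / slides.sum()
    return area_fraction, slide_fraction

# Toy raster: susceptibility scores and validation-subset landslide presence.
susc = np.array([0.9, 0.1, 0.7, 0.3, 0.8, 0.2, 0.6, 0.4])
slides = np.array([1, 0, 1, 0, 1, 0, 0, 0])
area, captured = prediction_rate_curve(susc, slides)
# All three landslides fall in the three most susceptible pixels, so 100%
# are captured within the top 3/8 of the area -- a steep prediction-rate curve.
```

A curve that rises much faster than the diagonal (as in this toy case) indicates that the susceptibility map concentrates future landslides in a small fraction of the territory.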

  3. Near-Term Fetuses Process Temporal Features of Speech

    Science.gov (United States)

    Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie

    2011-01-01

    The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…

  4. Spatio-temporal Data Model Based on Relational Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, the entity-relation data model for integrating spatio-temporal data is designed. With this design, spatio-temporal data can be effectively stored and spatio-temporal analysis can be easily realized.

  5. Visual-auditory differences in duration discrimination of intervals in the subsecond and second range

    Directory of Open Access Journals (Sweden)

    Thomas eRammsayer

    2015-10-01

    Full Text Available A common finding in time psychophysics is that temporal acuity is much better for auditory than for visual stimuli. The present study aimed to examine modality-specific differences in duration discrimination within the conceptual framework of the Distinct Timing Hypothesis. This theoretical account proposes that durations in the lower milliseconds range are processed automatically while longer durations are processed by a cognitive mechanism. A sample of 46 participants performed auditory and visual duration discrimination tasks with extremely brief (50-ms standard duration) and longer (1000-ms standard duration) intervals. Better discrimination performance for auditory compared to visual intervals could be established for extremely brief and longer intervals. However, when performance on duration discrimination of longer intervals in the one-second range was controlled for modality-specific input from the sensory-automatic timing mechanism, the visual-auditory difference disappeared completely, as indicated by virtually identical Weber fractions for both sensory modalities. These findings support the idea of a sensory-automatic mechanism underlying the observed visual-auditory differences in duration discrimination of extremely brief intervals in the millisecond range and longer intervals in the one-second range. Our data are consistent with the notion of a gradual transition from a purely modality-specific, sensory-automatic to a more cognitive, amodal timing mechanism. Within this transition zone, both mechanisms appear to operate simultaneously, but the influence of the sensory-automatic timing mechanism is expected to continuously decrease with increasing interval duration.
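    The modality comparison in this record rests on Weber fractions, i.e., the discrimination threshold divided by the standard duration. A minimal sketch of that computation follows; the threshold values are hypothetical illustrations, not data from the study.

```python
def weber_fraction(threshold_ms: float, standard_ms: float) -> float:
    """Weber fraction: difference threshold relative to the standard duration."""
    return threshold_ms / standard_ms

# Hypothetical thresholds (ms), for illustration only. A modality gap at the
# 50-ms standard that vanishes at the 1000-ms standard mirrors the reported
# pattern: identical Weber fractions for longer intervals once the
# sensory-automatic contribution is controlled for.
auditory = {50.0: 8.0, 1000.0: 80.0}
visual = {50.0: 16.0, 1000.0: 80.0}

for standard in (50.0, 1000.0):
    wf_auditory = weber_fraction(auditory[standard], standard)
    wf_visual = weber_fraction(visual[standard], standard)
    print(f"{standard:.0f} ms standard: auditory WF={wf_auditory:.2f}, "
          f"visual WF={wf_visual:.2f}")
```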

  6. Dichotic auditory-verbal memory in adults with cerebro-vascular accident

    Directory of Open Access Journals (Sweden)

    Samaneh Yekta

    2014-01-01

    Full Text Available Background and Aim: Cerebrovascular accident is a neurological disorder involving the central nervous system. Studies have shown that it affects the outputs of behavioral auditory tests such as the dichotic auditory-verbal memory test. The purpose of this study was to compare the results of this memory test between patients with cerebrovascular accident and normal subjects. Methods: This cross-sectional study was conducted on 20 patients with cerebrovascular accident aged 50-70 years and 20 controls matched for age and gender in Emam Khomeini Hospital, Tehran, Iran. The dichotic auditory-verbal memory test was performed on each subject. Results: The mean scores of the two groups were significantly different (p<0.0001). The right-ear score was significantly greater than the left-ear score in normal subjects (p<0.0001) and in patients with right hemisphere lesions (p<0.0001). The right-ear and left-ear scores were not significantly different in patients with left hemisphere lesions (p=0.0860). Conclusion: Among other methods, the dichotic auditory-verbal memory test is a beneficial test for assessing the central auditory nervous system of patients with cerebrovascular accident. It appears to be sensitive to the damage that occurs following temporal lobe strokes.

  7. Neuronal connectivity and interactions between the auditory and limbic systems. Effects of noise and tinnitus.

    Science.gov (United States)

    Kraus, Kari Suzanne; Canlon, Barbara

    2012-06-01

    Acoustic experience such as sound, noise, or absence of sound induces structural or functional changes in the central auditory system but can also affect limbic regions such as the amygdala and hippocampus. The amygdala is particularly sensitive to sound with valence or meaning, such as vocalizations, crying or music. The amygdala plays a central role in auditory fear conditioning, regulation of the acoustic startle response and can modulate auditory cortex plasticity. A stressful acoustic stimulus, such as noise, causes amygdala-mediated release of stress hormones via the HPA-axis, which may have negative effects on health, as well as on the central nervous system. In contrast, short-term exposure to stress hormones elicits positive effects such as hearing protection. The hippocampus can affect auditory processing by adding a temporal dimension and by mediating novelty detection via theta-wave phase-locking. Noise exposure affects hippocampal neurogenesis and LTP in a manner that affects structural plasticity, learning and memory. Tinnitus, typically induced by hearing malfunctions, is associated with emotional stress, depression and anatomical changes of the hippocampus. In turn, the limbic system may play a role in the generation as well as the suppression of tinnitus, indicating that the limbic system may be essential for tinnitus treatment. A further understanding of auditory-limbic interactions will contribute to future treatment strategies of tinnitus and noise trauma. PMID:22440225

  8. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  9. Auditory brainstem response in dolphins.

    OpenAIRE

    Ridgway, S. H.; Bullock, T H; Carder, D.A.; Seeley, R L; Woods, D.; Galambos, R

    1981-01-01

    We recorded the auditory brainstem response (ABR) in four dolphins (Tursiops truncatus and Delphinus delphis). The ABR evoked by clicks consists of seven waves within 10 msec; two waves often contain dual peaks. The main waves can be identified with those of humans and laboratory mammals; in spite of a much longer path, the latencies of the peaks are almost identical to those of the rat. The dolphin ABR waves increase in latency as the intensity of a sound decreases by only 4 microseconds/dec...

  10. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  11. Sensorimotor learning in children and adults: Exposure to frequency-altered auditory feedback during speech production.

    Science.gov (United States)

    Scheerer, N E; Jacobson, D S; Jones, J A

    2016-02-01

    Auditory feedback plays an important role in the acquisition of fluent speech; however, this role may change once speech is acquired and individuals no longer experience persistent developmental changes to the brain and vocal tract. For this reason, we investigated whether the role of auditory feedback in sensorimotor learning differs across children and adult speakers. Participants produced vocalizations while they heard their vocal pitch predictably or unpredictably shifted downward one semitone. The participants' vocal pitches were measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback modified subsequent speech motor commands. Sensorimotor learning was observed in both children and adults, with participants' initial vocal pitch increasing following trials where they were exposed to predictable, but not unpredictable, frequency-altered feedback. Participants' vocal pitch was also measured across each vocalization, to index the extent to which the deviant auditory feedback was used to modify ongoing vocalizations. While both children and adults were found to increase their vocal pitch following predictable and unpredictable changes to their auditory feedback, adults produced larger compensatory responses. The results of the current study demonstrate that both children and adults rapidly integrate information derived from their auditory feedback to modify subsequent speech motor commands. However, these results also demonstrate that children and adults differ in their ability to use auditory feedback to generate compensatory vocal responses during ongoing vocalization. Since vocal variability also differed across the children and adult groups, these results also suggest that compensatory vocal responses to frequency-altered feedback manipulations initiated at vocalization onset may be modulated by vocal variability. PMID:26628403

  12. Multi-Scale Entrainment of Coupled Neuronal Oscillations in Primary Auditory Cortex.

    Science.gov (United States)

    O'Connell, M N; Barczak, A; Ross, D; McGinnis, T; Schroeder, C E; Lakatos, P

    2015-01-01

    Earlier studies demonstrate that when the frequency of rhythmic tone sequences or streams is task relevant, ongoing excitability fluctuations (oscillations) of neuronal ensembles in primary auditory cortex (A1) entrain to stimulation in a frequency dependent way that sharpens frequency tuning. The phase distribution across A1 neuronal ensembles at time points when attended stimuli are predicted to occur reflects the focus of attention along the spectral attribute of auditory stimuli. This study examined how neuronal activity is modulated if only the temporal features of rhythmic stimulus streams are relevant. We presented macaques with auditory clicks arranged in 33 Hz (gamma timescale) quintets, repeated at a 1.6 Hz (delta timescale) rate. Such multi-scale, hierarchically organized temporal structure is characteristic of vocalizations and other natural stimuli. Monkeys were required to detect and respond to deviations in the temporal pattern of gamma quintets. As expected, engagement in the auditory task resulted in the multi-scale entrainment of delta- and gamma-band neuronal oscillations across all of A1. Surprisingly, however, the phase-alignment, and thus, the physiological impact of entrainment differed across the tonotopic map in A1. In the region of 11-16 kHz representation, entrainment most often aligned high excitability oscillatory phases with task-relevant events in the input stream and thus resulted in response enhancement. In the remainder of the A1 sites, entrainment generally resulted in response suppression. Our data indicate that the suppressive effects were due to low excitability phase delta oscillatory entrainment and the phase amplitude coupling of delta and gamma oscillations. Regardless of the phase or frequency, entrainment appeared stronger in left A1, indicative of the hemispheric lateralization of auditory function. PMID:26696866

  13. Multi-scale entrainment of coupled neuronal oscillations in primary auditory cortex.

    Directory of Open Access Journals (Sweden)

    Monica Noelle O'Connell

    2015-12-01

    Full Text Available Earlier studies demonstrate that when the frequency of rhythmic tone sequences or streams is task relevant, ongoing excitability fluctuations (oscillations) of neuronal ensembles in primary auditory cortex (A1) entrain to stimulation in a frequency dependent way that sharpens frequency tuning. The phase distribution across A1 neuronal ensembles at time points when attended stimuli are predicted to occur reflects the focus of attention along the spectral attribute of auditory stimuli. This study examined how neuronal activity is modulated if only the temporal features of rhythmic stimulus streams are relevant. We presented macaques with auditory clicks arranged in 33 Hz (gamma timescale) quintets, repeated at a 1.6 Hz (delta timescale) rate. Such multi-scale, hierarchically organized temporal structure is characteristic of vocalizations and other natural stimuli. Monkeys were required to detect and respond to deviations in the temporal pattern of gamma quintets. As expected, engagement in the auditory task resulted in the multi-scale entrainment of delta- and gamma-band neuronal oscillations across all of A1. Surprisingly, however, the phase-alignment, and thus, the physiological impact of entrainment differed across the tonotopic map in A1. In the region of 11-16 kHz representation, entrainment most often aligned high excitability oscillatory phases with task-relevant events in the input stream and thus resulted in response enhancement. In the remainder of the A1 sites, entrainment generally resulted in response suppression. Our data indicate that the suppressive effects were due to low excitability phase delta oscillatory entrainment and the phase amplitude coupling of delta and gamma oscillations. Regardless of the phase or frequency, entrainment appeared stronger in left A1, indicative of the hemispheric lateralization of auditory function.

  14. Material differences of auditory source retrieval: Evidence from event-related potential studies

    Institute of Scientific and Technical Information of China (English)

    NIE AiQing; GUO ChunYan; SHEN MoWei

    2008-01-01

    Two event-related potential experiments were conducted to investigate the temporal and the spatial distributions of the old/new effects for the item recognition task and the auditory source retrieval task, using pictures and Chinese characters as stimuli respectively. Stimuli were presented at the center of the screen with their names read out either by a female or by a male voice simultaneously during the study phase, and then two tests were performed separately. One test task was to differentiate the old items from the new ones, and the other task was to judge the items read out by a certain voice during the study phase as targets and other ones as non-targets. The results showed that the old/new effect of the auditory source retrieval task was more sustained over time than that of the item recognition task in both experiments, and the spatial distribution of the former effect was wider than that of the latter one. Both experiments recorded a reliable old/new effect over the prefrontal cortex during the source retrieval task. However, there existed some differences in the old/new effect for the auditory source retrieval task between pictures and Chinese characters, and LORETA source analysis indicated that the differences might be rooted in the temporal lobe. These findings demonstrate that the relevancy of the old/new effects between the item recognition task and the auditory source retrieval task supports the dual-process model; the spatial and the temporal distributions of the old/new effect elicited by the auditory source retrieval task are regulated by both the feature of the experimental material and the perceptual attribute of the voice.

  15. Direct recordings from the auditory cortex in a cochlear implant user.

    Science.gov (United States)

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings. PMID:23519390

  16. The peripheral auditory characteristics of noctuid moths: responses to the search-phase echolocation calls of bats

    Science.gov (United States)

    Waters; Jones

    1996-01-01

    The noctuid moths Agrotis segetum and Noctua pronuba show peak auditory sensitivity between 15 and 25 kHz, and a maximum sensitivity of 35 dB SPL. A. segetum shows a temporal integration time of 69 ms. It is predicted that bats using high-frequency and short-duration calls will be acoustically less apparent to these moths. Short-duration frequency-modulated (FM) calls of Plecotus auritus are not significantly less acoustically apparent than those of other FM bats with slightly longer call durations, based on their combined frequency and temporal structure alone. Long-duration, high-frequency, constant-frequency (CF) calls of Rhinolophus hipposideros at 113 kHz are significantly less apparent than those of the FM bats tested. The predicted low call apparency of the 83 kHz CF calls of R. ferrumequinum appears to be counteracted by their long duration. It is proposed that two separate mechanisms are exploited by bats to reduce their call apparency, low intensity in FM bats and high frequency in CF bats. Within the FM bats tested, shorter-duration calls do not significantly reduce the apparency of the call at the peripheral level, though they may limit the amount of information available to the central nervous system. PMID:9318627

  17. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Yakunina, Natalia [Kangwon National University, Institute of Medical Science, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kang, Eun Kyoung [Kangwon National University Hospital, Department of Rehabilitation Medicine, Chuncheon (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of); Min, Ji-Hoon [University of Michigan, Department of Biopsychology, Cognition, and Neuroscience, Ann Arbor, MI (United States); Kim, Sam Soo [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Radiology, Chuncheon (Korea, Republic of); Nam, Eui-Cheol [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of)

    2015-10-15

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)
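    The volume-matched subsampling step in this record (deriving the CAm dataset by subsampling the CA data to align it temporally with the STA volumes) can be sketched as follows. The TR and stimulation-cycle timings here are hypothetical stand-ins, not the study's acquisition parameters.

```python
import numpy as np

# Hypothetical timing grid: continuous acquisition (CA) collects a volume
# every 2 s over a 400-s run, while sparse temporal acquisition (STA)
# collects one volume per 10-s stimulation cycle.
ca_times = np.arange(0.0, 400.0, 2.0)     # 200 CA volume onsets (s)
sta_times = np.arange(8.0, 400.0, 10.0)   # 40 STA volume onsets (s)

# Volume-matched CA subset (CAm): for each STA volume, keep the CA volume
# acquired nearest in time, so both datasets have identical length and pacing
# and can be compared with the same independent component analysis.
nearest = np.abs(ca_times[:, None] - sta_times[None, :]).argmin(axis=0)
cam_times = ca_times[nearest]

assert cam_times.shape == sta_times.shape  # one CA volume per STA volume
```

With matched volume counts, any difference between the STA and CAm analyses reflects when the data were acquired relative to the scanner noise, not how much data entered the ICA.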

  18. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    International Nuclear Information System (INIS)

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)

  19. Effect of human auditory efferent feedback on cochlear gain and compression.

    Science.gov (United States)

    Yasin, Ifat; Drga, Vit; Plack, Christopher J

    2014-11-12

    The mammalian auditory system includes a brainstem-mediated efferent pathway from the superior olivary complex by way of the medial olivocochlear system, which reduces the cochlear response to sound (Warr and Guinan, 1979; Liberman et al., 1996). The human medial olivocochlear response has an onset delay of between 25 and 40 ms and rise and decay constants in the region of 280 and 160 ms, respectively (Backus and Guinan, 2006). Physiological studies with nonhuman mammals indicate that onset and decay characteristics of efferent activation are dependent on the temporal and level characteristics of the auditory stimulus (Bacon and Smith, 1991; Guinan and Stankovic, 1996). This study uses a novel psychoacoustical masking technique, based on a precursor sound, to obtain a measure of the efferent effect in humans. This technique avoids confounds currently associated with other psychoacoustical measures. Both the temporal and the level dependency of the efferent effect were measured, providing a comprehensive measure of the effect of human auditory efferents on cochlear gain and compression. Results indicate that a precursor (>20 dB SPL) induced efferent activation, resulting in a decrease in both maximum gain and maximum compression, with linearization of the compressive function for input sound levels between 50 and 70 dB SPL. Estimated gain decreased as precursor level increased, and increased as the silent interval between the precursor and combined masker-signal stimulus increased, consistent with a decay of the efferent effect. Human auditory efferent activation linearizes the cochlear response for mid-level sounds while reducing maximum gain. PMID:25392499

  20. Tactile feedback improves auditory spatial localization.

    Science.gov (United States)

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality. PMID:25368587