WorldWideScience

Sample records for auditory temporal integration

  1. Auditory temporal resolution and integration - stages of analyzing time-varying sounds

    DEFF Research Database (Denmark)

    Pedersen, Benjamin

    2007-01-01

    ...... much is still unknown about how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, approached via four different listening experiments designed to probe several aspects of temporal processing ...... Effects such as attention seem to play an important role in loudness integration, and further, it will be demonstrated that the auditory system can rely on temporal cues at a much finer level of detail than predicted by existing models (temporal details in the time range of 60 µs can ......

  2. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis and Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.
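
The auditory stimulus described above (a 500 Hz tone, sinusoidally amplitude-modulated at 0.7 Hz) can be sketched in a few lines. The duration and sampling rate below are illustrative assumptions, not values taken from the study:

```python
import numpy as np

def am_tone(carrier_hz=500.0, mod_hz=0.7, dur_s=4.0, fs=44100):
    """Sinusoidally amplitude-modulated pure tone (full modulation depth)."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_hz * t))  # 0..1
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

signal = am_tone()
```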

  3. Temporal Lobe Epilepsy Alters Auditory-motor Integration For Voice Control

    Science.gov (United States)

    Li, Weifeng; Chen, Ziyi; Yan, Nan; Jones, Jeffery A.; Guo, Zhiqiang; Huang, Xiyan; Chen, Shaozhen; Liu, Peng; Liu, Hanjun

    2016-01-01

    Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls. PMID:27356768

  4. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Introduction: In the aging process all structures of the organism change, affecting the quality of hearing and comprehension. The hearing loss that occurs as a consequence of this process reduces communicative function and also distances the individual from social relationships. Objective: To compare temporal auditory processing performance between elderly individuals with and without hearing loss. Method: This is a prospective, cross-sectional field study of diagnostic character. Twenty-one elders (16 women and 5 men), aged 60 to 81 years, were divided into two groups: a group "without hearing loss" (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a group "with hearing loss" (n = 8), with sensorineural hearing loss of degree varying from mild to moderately severe. Both groups performed the frequency pattern (PPS) and duration pattern (DPS) tests, to evaluate temporal sequencing ability, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution. Results: There was no statistically significant difference between the groups on the DPS and RGDT tests. Temporal sequencing ability was significantly better in the group without hearing loss when evaluated by the PPS in the "humming" condition, and this difference increased significantly with age group. Conclusion: There was no difference in temporal auditory processing in the comparison between the groups.

  5. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Several studies have reported age-related declines in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect changes in the same mechanism or age-related changes in two independent mechanisms. In the current study we examined whether the age-related declines in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82, performed a dichotic temporal order judgment task and a backward digit span task. The findings indicate that age-related declines in auditory temporal resolution and in working memory are two independent processes.

  6. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to lie in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of the proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands), whereas stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that multi-time resolution processing occurs in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
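
The non-speech stimuli described above, concatenated frequency-modulated segments with durations matched to the two speech-relevant scales, can be sketched roughly as follows. The carrier range, total duration, and sampling rate are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_segment(dur_s, fs=16000, f_lo=500.0, f_hi=2000.0):
    """One segment sweeping linearly between two random frequencies."""
    t = np.arange(int(dur_s * fs)) / fs
    f0, f1 = rng.uniform(f_lo, f_hi, size=2)
    # Phase is the integral of the instantaneous frequency f0 + (f1-f0)*t/dur.
    phase = 2.0 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2 / dur_s)
    return np.sin(phase)

def concatenated_stimulus(seg_dur_s, total_s=3.0, fs=16000):
    """Concatenate FM segments of a fixed duration into one stimulus."""
    n = int(round(total_s / seg_dur_s))
    return np.concatenate([fm_segment(seg_dur_s, fs) for _ in range(n)])

theta_like = concatenated_stimulus(0.200)  # ~200 ms segments (theta scale)
gamma_like = concatenated_stimulus(0.025)  # ~25 ms segments (low gamma scale)
```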

  7. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL), i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole, and the most anterior part of the temporal gyri, is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, evaluating auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented similar deficits in pitch retention and in the identification and short-term memorisation of environmental sounds, while showing no impairment in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  8. Intact spectral but abnormal temporal processing of auditory stimuli in autism.

    NARCIS (Netherlands)

    Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with h

  9. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper, and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
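
The prediction measure mentioned above, cross-correlating tap timing with pacing events, can be conveyed with a minimal sketch. The exact index used in the tapping literature differs in detail; this hypothetical version simply compares lag-0 and lag-1 correlations between inter-tap intervals and pacing inter-onset intervals:

```python
import numpy as np

def prediction_index(iti, ioi):
    """Lag-0 minus lag-1 cross-correlation of inter-tap intervals (iti)
    with pacing inter-onset intervals (ioi).  Predictive tappers follow
    the tempo change at lag 0; reactive tappers lag one event behind,
    so positive values indicate prediction, negative values tracking."""
    iti, ioi = np.asarray(iti, float), np.asarray(ioi, float)
    cc0 = np.corrcoef(iti[1:], ioi[1:])[0, 1]   # lag 0 (prediction)
    cc1 = np.corrcoef(iti[1:], ioi[:-1])[0, 1]  # lag 1 (tracking)
    return cc0 - cc1
```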

  10. Effects of Physical Rehabilitation Integrated with Rhythmic Auditory Stimulation on Spatio-Temporal and Kinematic Parameters of Gait in Parkinson’s Disease

    Science.gov (United States)

    Pau, Massimiliano; Corona, Federica; Pili, Roberta; Casula, Carlo; Sors, Fabrizio; Agostini, Tiziano; Cossu, Giovanni; Guicciardi, Marco; Murgia, Mauro

    2016-01-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson’s disease (PD). In this context, the use of rhythmic auditory stimulation (RAS) has proven useful in improving several spatio-temporal parameters, but scarce information is available on its effect on gait patterns from a kinematic viewpoint. In this study, we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of supervised rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4 ± 11.1 years, Hoehn and Yahr stages 1–3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a 3-month follow-up, using concise measures (Gait Profile Score and Gait Variable Score, GPS and GVS, respectively) that describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, regularizing cadence, and correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, driven mainly by significant decreases in the GVS score associated with the hip flexion–extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new rehabilitative treatments or improving current ones. PMID:27563296
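
The GPS and GVS measures referred to above are, in essence, RMS deviations of a subject's gait-variable curves from a reference dataset. A minimal sketch, assuming the curves are already time-normalized to the gait cycle:

```python
import numpy as np

def gvs(subject_curve, reference_curve):
    """Gait Variable Score: RMS difference (in degrees) between a subject's
    kinematic curve and the reference mean curve over the gait cycle."""
    d = np.asarray(subject_curve, float) - np.asarray(reference_curve, float)
    return float(np.sqrt(np.mean(d ** 2)))

def gps(subject_curves, reference_curves):
    """Gait Profile Score: RMS of the individual GVS values."""
    scores = [gvs(s, r) for s, r in zip(subject_curves, reference_curves)]
    return float(np.sqrt(np.mean(np.square(scores))))
```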

  12. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills, which are found to be deficient in individuals with dyslexia. PMID:25044949

  13. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation...... of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...... on the stream segregation process was analysed. The model analysis showed that auditory frequency selectivity and physiological forward masking play a significant role in stream segregation based on frequency separation and tone rate. Secondly, the model analysis suggested that neural adaptation...
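
The temporal-coherence idea in the abstract above (sounds that excite non-overlapping channels incoherently are heard as separate streams, synchronous sounds as one) can be caricatured with binary channel envelopes. The tone duration and the channel labels are illustrative assumptions:

```python
import numpy as np

def channel_envelopes(pattern, tone_ms=50, fs=1000):
    """Binary activation envelopes of two frequency channels for a tone
    sequence: 'A'/'B' excite disjoint channels, 'S' excites both at once,
    '-' is silence."""
    n = int(tone_ms * fs / 1000)
    a = np.concatenate([np.full(n, 1.0 if c in 'AS' else 0.0) for c in pattern])
    b = np.concatenate([np.full(n, 1.0 if c in 'BS' else 0.0) for c in pattern])
    return a, b

def coherence(a, b):
    """Correlation between channel envelopes: low/negative values favour
    segregation into two streams, high values favour integration."""
    return float(np.corrcoef(a, b)[0, 1])
```

An alternating ABAB sequence yields anti-correlated envelopes (two streams), while synchronous tones ('S-S-') yield fully correlated envelopes (one stream).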

  14. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2012-05-01

    Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  15. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    BACKGROUND: We ordinarily perceive our voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated this temporal tuning mechanism under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific DAF delay times (0, 66, or 133 ms) for three minutes to induce 'lag adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that this temporal recalibration is affected by the average delay time in the adaptation phase. CONCLUSIONS: These findings suggest that vocalization is finely tuned by a temporal recalibration mechanism that acutely monitors the integration of temporal delays between motor sensation and vocal sound.
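
A DAF setup simply feeds the talker's voice back with a fixed delay. A minimal sketch of the delay line, with an assumed sampling rate:

```python
import numpy as np

def delay_feedback(signal, delay_ms, fs=44100):
    """Return the input signal delayed by delay_ms, as it would be heard
    over headphones in a delayed-auditory-feedback setup (zeros fill the
    initial delay; output is truncated to the input length)."""
    n = int(round(delay_ms * fs / 1000.0))
    return np.concatenate([np.zeros(n), np.asarray(signal, float)])[:len(signal)]
```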

  16. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity changes due to stimulus timing were seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  18. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with
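
Synchronous (temporal) coding of AM, as discussed above, is commonly quantified by vector strength: the length of the mean resultant vector of spike phases relative to the modulation cycle. A minimal sketch:

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Synchronization index of spikes to the AM cycle:
    1.0 means perfect phase locking, values near 0 mean no locking."""
    phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times_s, float)
    return float(np.abs(np.mean(np.exp(1j * phases))))
```

For example, one spike per cycle at a fixed phase gives a value of 1, while spikes scattered uniformly in time give a value near 0.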

  19. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and that the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  20. Left temporal lobe structural and functional abnormality underlying auditory hallucinations

    Directory of Open Access Journals (Sweden)

    Kenneth Hugdahl

    2009-05-01

    In this article, we review recent findings from our laboratory indicating that auditory hallucinations in schizophrenia are internally generated speech mis-representations lateralized to the left superior temporal gyrus and sulcus. Such experiences are, moreover, not cognitively suppressed, owing to enhanced attention to the voices and failure of fronto-parietal executive control functions. An overview of diagnostic questionnaires for the scoring of symptoms is presented, together with a review of behavioural, structural, and functional MRI data. Functional imaging data have shown either increased or decreased activation depending on whether patients were presented with an external stimulus during scanning. Structural imaging data have shown reductions of grey matter density and volume in the same areas of the temporal lobe. The behavioural and neuroimaging findings are further hypothesized to be related to glutamate hypofunction in schizophrenia. We propose a model for the understanding of auditory hallucinations that traces their origin to uncontrolled neuronal firing in the speech areas of the left temporal lobe, which is not suppressed by volitional cognitive control processes, owing to dysfunctional fronto-parietal executive cortical networks.

  1. Rapid auditory learning of temporal gap detection.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2016-07-01

    The rapid initial phase of training-induced improvement has been shown to reflect a genuine sensory change in perception. Several features of early and rapid learning, such as generalization and stability, remain to be characterized. The present study demonstrated that learning effects from brief training on a temporal gap detection task, using spectrally similar narrowband noise markers to define the gap (a within-channel task), transfer across ears but not to spectrally dissimilar markers (a between-channel task). The learning effects associated with brief training on a gap detection task were found to be stable for at least a day. These initial findings have significant implications for characterizing early and rapid learning effects. PMID:27475211
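
The within-channel stimulus described above, a silent gap bounded by two spectrally similar narrowband noise markers, can be sketched as follows. Marker duration, centre frequency, and bandwidth are illustrative assumptions, not parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def narrowband_noise(dur_s, center_hz, bw_hz, fs=44100):
    """Bandlimited Gaussian noise made by masking the spectrum of white
    noise (illustrative; gating/ramping omitted for brevity)."""
    n = int(dur_s * fs)
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec[np.abs(freqs - center_hz) > bw_hz / 2.0] = 0.0
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))

def gap_stimulus(gap_ms, marker_ms=300, center_hz=1000.0, fs=44100):
    """Two noise markers separated by a silent gap: within-channel when both
    markers share a centre frequency, between-channel when they differ."""
    m1 = narrowband_noise(marker_ms / 1000.0, center_hz, 400.0, fs)
    m2 = narrowband_noise(marker_ms / 1000.0, center_hz, 400.0, fs)
    gap = np.zeros(int(gap_ms * fs / 1000))
    return np.concatenate([m1, gap, m2])
```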

  2. Temporal coding by populations of auditory receptor neurons.

    Science.gov (United States)

    Sabourin, Patrick; Pollack, Gerald S

    2010-03-01

    Auditory receptor neurons of crickets are most sensitive to either low or high sound frequencies. Earlier work showed that the temporal coding properties of first-order auditory interneurons are matched to the temporal characteristics of natural low- and high-frequency stimuli (cricket songs and bat echolocation calls, respectively). We studied the temporal coding properties of receptor neurons and used modeling to investigate how activity within populations of low- and high-frequency receptors might contribute to the coding properties of interneurons. We confirm earlier findings that individual low-frequency-tuned receptors code stimulus temporal pattern poorly, but show that coding performance of a receptor population increases markedly with population size, due in part to low redundancy among the spike trains of different receptors. By contrast, individual high-frequency-tuned receptors code a stimulus temporal pattern fairly well and, because their spike trains are redundant, there is only a slight increase in coding performance with population size. The coding properties of low- and high-frequency receptor populations resemble those of interneurons in response to low- and high-frequency stimuli, suggesting that coding at the interneuron level is partly determined by the nature and organization of afferent input. Consistent with this, the sound-frequency-specific coding properties of an interneuron, previously demonstrated by analyzing its spike train, are also apparent in the subthreshold fluctuations in membrane potential that are generated by synaptic input from receptor neurons.

  3. Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Science.gov (United States)

    Muñoz-López, M; Insausti, R; Mohedano-Moriano, A; Mishkin, M; Saunders, R C

    2015-01-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may therefore differ from those for vision. Long-term visual recognition memory requires anatomical connections from the visual association area TE to areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher-order processing areas of the rostral superior temporal gyrus (rSTG) and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in the rSTG and dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, and TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of the PRC, the entorhinal cortex (EC), and areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of the PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys. PMID:26041980

  4. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus;

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  5. Spectral and temporal processing in rat posterior auditory cortex.

    Science.gov (United States)

    Pandya, Pritesh K; Rathbun, Daniel L; Moucha, Raluca; Engineer, Navzer D; Kilgard, Michael P

    2008-02-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than those of A1 neurons. The broader receptive fields of PAF neurons result in responses to narrowband and broadband inputs that are stronger than those of A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. PAF neurons exhibit latencies that are twice as long as those of A1 neurons. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1 neurons, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds, and they do not exhibit strong selectivity for the rate or direction of narrowband, one-octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on broader spectral and longer temporal scales than primary cortex.

  6. Repetition suppression in auditory-motor regions to pitch and temporal structure in music.

    Science.gov (United States)

    Brown, Rachel M; Chen, Joyce L; Hollinger, Avrum; Penhune, Virginia B; Palmer, Caroline; Zatorre, Robert J

    2013-02-01

    Music performance requires control of two sequential structures: the ordering of pitches and the temporal intervals between successive pitches. Whether pitch and temporal structures are processed as separate or integrated features remains unclear. A repetition suppression paradigm compared neural and behavioral correlates of mapping pitch sequences and temporal sequences to motor movements in music performance. Fourteen pianists listened to and performed novel melodies on an MR-compatible piano keyboard during fMRI scanning. The pitch or temporal patterns in the melodies either changed or repeated (remained the same) across consecutive trials. We expected decreased neural response to the patterns (pitch or temporal) that repeated across trials relative to patterns that changed. Pitch and temporal accuracy were high, and pitch accuracy improved when either pitch or temporal sequences repeated over trials. Repetition of either pitch or temporal sequences was associated with linear BOLD decrease in frontal-parietal brain regions including dorsal and ventral premotor cortex, pre-SMA, and superior parietal cortex. Pitch sequence repetition (in contrast to temporal sequence repetition) was associated with linear BOLD decrease in the intraparietal sulcus (IPS) while pianists listened to melodies they were about to perform. Decreased BOLD response in IPS also predicted increase in pitch accuracy only when pitch sequences repeated. Thus, behavioral performance and neural response in sensorimotor mapping networks were sensitive to both pitch and temporal structure, suggesting that pitch and temporal structure are largely integrated in auditory-motor transformations. IPS may be involved in transforming pitch sequences into spatial coordinates for accurate piano performance.

  7. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    Full Text Available In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration, used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus, resulted in activation of the intraparietal sulcus (IPS). These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams, respectively.

  8. A Pilot Study of Auditory Integration Training in Autism.

    Science.gov (United States)

    Rimland, Bernard; Edelson, Stephen M.

    1995-01-01

    The effectiveness of Auditory Integration Training (AIT) in 8 autistic individuals (ages 4-21) was evaluated using repeated multiple criteria assessment over a 3-month period. Compared to matched controls, subjects' scores improved on the Aberrant Behavior Checklist and Fisher's Auditory Problems Checklist. AIT did not decrease sound sensitivity.…

  9. Spectral features control temporal plasticity in auditory cortex.

    Science.gov (United States)

    Kilgard, M P; Pandya, P K; Vazquez, J L; Rathbun, D L; Engineer, N D; Moucha, R

    2001-01-01

    Cortical responses are adjusted and optimized throughout life to meet changing behavioral demands and to compensate for peripheral damage. The cholinergic nucleus basalis (NB) gates cortical plasticity and focuses learning on behaviorally meaningful stimuli. By systematically varying the acoustic parameters of the sound paired with NB activation, we have previously shown that tone frequency and amplitude modulation rate alter the topography and selectivity of frequency tuning in primary auditory cortex. This result suggests that network-level rules operate in the cortex to guide reorganization based on specific features of the sensory input associated with NB activity. This report summarizes recent evidence that the temporal response properties of cortical neurons are influenced by the spectral characteristics of sounds associated with cholinergic modulation. For example, repeated pairing of a spectrally complex (ripple) stimulus with NB activation decreased the minimum response latency for the ripple but lengthened the minimum latency for tones. Pairing a rapid train of tones with NB activation increased the maximum following rate of cortical neurons only when the carrier frequency of each train was randomly varied. These results suggest that spectral and temporal parameters of acoustic experiences interact to shape spectrotemporal selectivity in the cortex. Additional experiments with more complex stimuli are needed to clarify how the cortex learns natural sounds such as speech.

  10. Carrier-dependent temporal processing in an auditory interneuron.

    Science.gov (United States)

    Sabourin, Patrick; Gottlieb, Heather; Pollack, Gerald S

    2008-05-01

    Signal processing in the auditory interneuron Omega Neuron 1 (ON1) of the cricket Teleogryllus oceanicus was compared at high- and low-carrier frequencies in three different experimental paradigms. First, integration time, which corresponds to the time it takes for a neuron to reach threshold when stimulated at the minimum effective intensity, was found to be significantly shorter at high-carrier frequency than at low-carrier frequency. Second, phase locking to sinusoidally amplitude modulated signals was more efficient at high frequency, especially at high modulation rates and low modulation depths. Finally, we examined the efficiency with which ON1 detects gaps in a constant tone. As reflected by the decrease in firing rate in the vicinity of the gap, ON1 is better at detecting gaps at low-carrier frequency. Following a gap, firing rate increases beyond the pre-gap level. This "rebound" phenomenon is similar for low- and high-carrier frequencies.
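
    Phase locking to sinusoidally amplitude-modulated signals, as measured for ON1, is commonly quantified by vector strength; the sketch below uses the standard definition with hypothetical spike times, not necessarily the authors' exact measure.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength: 1.0 means perfect phase locking to the
    modulation cycle; values near 0 mean spikes are uniformly
    distributed in modulation phase."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times, dtype=float)
    return np.abs(np.mean(np.exp(1j * phases)))
```

    For spikes locked to one phase of a 50 Hz modulation the value approaches 1; for spikes scattered uniformly over many modulation cycles it approaches 0.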

  11. Neural Representations of Complex Temporal Modulations in the Human Auditory Cortex

    OpenAIRE

    Ding, Nai; Simon, Jonathan Z.

    2009-01-01

    Natural sounds such as speech contain multiple levels and multiple types of temporal modulations. Because of nonlinearities of the auditory system, however, the neural response to multiple simultaneous temporal modulations cannot be predicted from the neural responses to single modulations. Here we show the cortical neural representation of an auditory stimulus simultaneously frequency modulated (FM) at a high rate, fFM ≈ 40 Hz, and amplitude modulated (AM) at a slow rate, fAM
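
    A stimulus of the kind described, a single carrier simultaneously carrying fast FM and slow AM, can be synthesized as follows. All parameter values are illustrative assumptions except the 40 Hz FM rate mentioned in the abstract.

```python
import numpy as np

def am_fm_tone(duration=1.0, fs=8000.0, fc=500.0,
               f_fm=40.0, fm_depth=50.0, f_am=2.0, am_depth=0.5):
    """Carrier at fc (Hz), frequency modulated at f_fm with peak
    deviation fm_depth (Hz), and amplitude modulated at f_am with
    modulation depth am_depth. Output is normalized to [-1, 1]."""
    t = np.arange(int(duration * fs)) / fs
    # FM: integrate the instantaneous frequency fc + fm_depth*cos(2*pi*f_fm*t)
    phase = 2 * np.pi * fc * t + (fm_depth / f_fm) * np.sin(2 * np.pi * f_fm * t)
    envelope = 1.0 + am_depth * np.sin(2 * np.pi * f_am * t)
    return envelope * np.sin(phase) / (1.0 + am_depth)
```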

  12. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties. PMID:24264076

  13. Deficit of auditory temporal processing in children with dyslexia-dysgraphia

    Directory of Open Access Journals (Sweden)

    Sima Tajik

    2012-12-01

    Full Text Available Background and Aim: Auditory temporal processing reflects an important aspect of auditory performance; a deficit in it can impair a child's speech, language learning, and reading. Temporal resolution, a subcomponent of temporal processing, can be evaluated with the gap-in-noise (GIN) detection test. Given the relation between auditory temporal processing deficits and the phonological disorder of children with dyslexia-dysgraphia, the aim of this study was to evaluate these children with the GIN test. Methods: The GIN test was performed on 28 normal and 24 dyslexic-dysgraphic children aged 11-12 years. Mean approximate thresholds and percentages of correct answers were compared between the groups. Results: Neither the mean approximate threshold nor the percentage of correct answers differed significantly between the right and left ears in either group (p>0.05). The mean approximate threshold of children with dyslexia-dysgraphia (6.97 ms, SD=1.09) was significantly higher (p<0.001) than that of the normal group (5.05 ms, SD=0.92), and their mean percentage of correct answers (58.05%, SD=4.98) was lower than that of the normal group (69.97%, SD=7.16; p<0.001). Conclusion: Based on the GIN test, temporal resolution is abnormal in children with dyslexia-dysgraphia. Since the brainstem and auditory cortex are responsible for auditory temporal processing, structural and functional differences in these areas between normal and dyslexic-dysgraphic children probably lead to abnormal coding of auditory temporal information and, consequently, to deficient auditory temporal processing.

  14. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    Science.gov (United States)

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  16. Effects of different types of auditory temporal training on language skills: a systematic review

    Directory of Open Access Journals (Sweden)

    Cristina Ferraz Borges Murphy

    2013-10-01

    Full Text Available Previous studies have investigated the effects of auditory temporal training on language disorders. Recently, the effects of new approaches, such as musical training and the use of software, have also been considered. To investigate the effects of different auditory temporal training approaches on language skills, we reviewed the available literature on musical training, the use of software, and formal auditory training by searching the SciELO, MEDLINE, LILACS-BIREME and EMBASE databases. Study Design: Systematic review. Results: Using evidence levels I and II as the criteria, 29 of the 523 papers found were deemed relevant to one of the topics (use of software - 13 papers; formal auditory training - 6 papers; musical training - 10 papers). Of the three approaches, studies that investigated the use of software and musical training had the highest levels of evidence; however, these studies also raised concerns about the hypothesized relationship between auditory temporal processing and language. Future studies are necessary to investigate the actual contribution of these three types of auditory temporal training to language skills.

  17. Temporal coordination in joint music performance: effects of endogenous rhythms and auditory feedback.

    Science.gov (United States)

    Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline

    2015-02-01

    Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.

  18. Temporal asymmetries in auditory coding and perception reflect multi-layered nonlinearities.

    Science.gov (United States)

    Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice

    2016-01-01

    Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932

  19. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    Science.gov (United States)

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.

  20. Temporal feature integration for music genre classification

    OpenAIRE

    Meng, Anders; Ahrendt, Peter; Larsen, Jan; Hansen, Lars Kai

    2007-01-01

    Temporal feature integration is the process of combining all the feature vectors in a time window into a single feature vector in order to capture the relevant temporal information in the window. The mean and variance along the temporal dimension are often used for temporal feature integration, but they capture neither the temporal dynamics nor the dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music genre classification.
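
    The multivariate autoregressive idea can be sketched as follows (a simplified least-squares estimator, not necessarily the authors' exact model): fit X[t] ≈ w + Σp Ap X[t−p] over the window's short-time feature vectors and use the stacked coefficients as the integrated feature.

```python
import numpy as np

def mar_features(X, order=3):
    """X: (n_frames, n_dims) short-time feature vectors in one window.
    Fits X[t] ~ w + sum_p A_p X[t-p] by least squares and returns the
    flattened coefficients as a single integrated feature vector."""
    n, d = X.shape
    rows, targets = [], []
    for t in range(order, n):
        # Design row: intercept followed by the `order` lagged frames
        lagged = np.concatenate([[1.0]] + [X[t - p] for p in range(1, order + 1)])
        rows.append(lagged)
        targets.append(X[t])
    A = np.asarray(rows)        # (n - order, 1 + order*d)
    Y = np.asarray(targets)     # (n - order, d)
    coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return coeffs.ravel()       # length (1 + order*d) * d
```

    For d-dimensional features and model order P, the integrated vector has (1 + P·d)·d entries, which a downstream classifier consumes in place of simple mean/variance statistics.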

  1. Temporal pattern recognition based on instantaneous spike rate coding in a simple auditory system.

    Science.gov (United States)

    Nabatiyan, A; Poulet, J F A; de Polavieja, G G; Hedwig, B

    2003-10-01

    Auditory pattern recognition by the CNS is a fundamental process in acoustic communication. Because crickets communicate with stereotyped patterns of constant frequency syllables, they are established models to investigate the neuronal mechanisms of auditory pattern recognition. Here we provide evidence that for the neural processing of amplitude-modulated sounds, the instantaneous spike rate rather than the time-averaged neural activity is the appropriate coding principle by comparing both coding parameters in a thoracic interneuron (Omega neuron ON1) of the cricket (Gryllus bimaculatus) auditory system. When stimulated with different temporal sound patterns, the analysis of the instantaneous spike rate demonstrates that the neuron acts as a low-pass filter for syllable patterns. The instantaneous spike rate is low at high syllable rates, but prominent peaks in the instantaneous spike rate are generated as the syllable rate resembles that of the species-specific pattern. The occurrence and repetition rate of these peaks in the neuronal discharge are sufficient to explain temporal filtering in the cricket auditory pathway as they closely match the tuning of phonotactic behavior to different sound patterns. Thus temporal filtering or "pattern recognition" occurs at an early stage in the auditory pathway.
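
    The coding principle compared here can be illustrated directly (hypothetical spike times): the instantaneous spike rate is the reciprocal of each interspike interval, so a brief burst produces sharp rate peaks that a whole-trial average would smear out.

```python
import numpy as np

def instantaneous_rate(spike_times):
    """Instantaneous spike rate (Hz): reciprocal of each interspike
    interval, assigned to the later spike of each pair."""
    t = np.sort(np.asarray(spike_times, dtype=float))
    isi = np.diff(t)
    return t[1:], 1.0 / isi

def mean_rate(spike_times, duration):
    """Time-averaged rate (Hz) over the whole trial."""
    return len(spike_times) / duration
```

    A burst of five spikes 2 ms apart within a 1 s trial yields instantaneous peaks of 500 Hz while the time-averaged rate is only 5 Hz, which is why the two measures support very different readouts of syllable patterns.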

  2. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters the timing and rates of pyramidal cell spike responses to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single- and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression of single-unit responses at the 20-ms pairing interval. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  3. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe.

    Science.gov (United States)

    Berns, Gregory S; Cook, Peter F; Foxley, Sean; Jbabdi, Saad; Miller, Karla L; Marino, Lori

    2015-07-22

    The brains of odontocetes (toothed whales) look grossly different from those of their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical spotted dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory (A1) cortex, we found distinct regions of the thalamus connected to V1 and A1. But in addition to the suprasylvian A1, we report here, for the first time, that auditory cortex also exists in the temporal lobe, in a region near cetacean A2 and possibly analogous to the primary auditory cortex of related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species

  4. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli combining the visual stimulus with an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP effect over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced an effect over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirm that audiovisual integration was elicited even when the auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain may be more sensitive to information received from behind than from either side.

  5. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble of aud...

  6. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  7. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance, acetylcholinesterase...

  8. An auditory illusion of infinite tempo change based on multiple temporal levels.

    Directory of Open Access Journals (Sweden)

    Guy Madison

    Humans and a few select insect and reptile species synchronise inter-individual behaviour without any time lag by predicting the time of future events rather than reacting to them. This is evident in music performance, dance, and drill. Although repetition of equal time intervals (i.e. isochrony) is the central principle for such prediction, this simple information is used in a flexible and complex way that accommodates multiples, subdivisions, and gradual changes of intervals. The scope of this flexibility remains largely uncharted, and the underlying mechanisms are a matter for speculation. Here I report an auditory illusion that highlights some aspects of this behaviour and that provides a powerful tool for its future study. A sound pattern is described that affords multiple alternative and concurrent rates of recurrence (temporal levels). An algorithm that systematically controls time intervals and the relative loudness among these levels creates an illusion that the perceived rate speeds up or slows down infinitely. Human participants synchronised hand movements with their perceived rate of events, and exhibited a change in their movement rate that was several times larger than the physical change in the sound pattern. The illusion demonstrates the duality between the external signal and the internal predictive process, such that people's tendency to follow their own subjective pulse overrides the overall properties of the stimulus pattern. Furthermore, accurate synchronisation with sounds separated by more than 8 s demonstrates that multiple temporal levels are employed to facilitate temporal organisation and integration by the human brain. A number of applications of the illusion and the stimulus pattern are suggested.
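
    The stimulus design is reminiscent of Risset's ever-accelerating rhythm. A minimal sketch of the principle, with illustrative parameters (not the published algorithm): several isochronous click levels whose rates differ by a factor of two are mixed, and a Gaussian loudness window on the log-rate axis is slid upward; after one full cycle the weight pattern has simply shifted by one level, so the pattern can appear to speed up indefinitely.

```python
import numpy as np

def level_amplitudes(phase, n_levels=5, sigma=0.8):
    """Loudness weights for temporal levels whose rates are powers of 2.

    `phase` in [0, 1) slides the Gaussian loudness window one octave up
    the log-rate axis; at phase 1 the weights equal those at phase 0,
    shifted by one level, so the apparent tempo can rise forever.
    (Illustrative sketch only; parameters are assumptions.)
    """
    centre = (n_levels - 1) / 2 + phase      # window centre in level units
    positions = np.arange(n_levels)          # level i has rate base * 2**i
    return np.exp(-0.5 * ((positions - centre) / sigma) ** 2)

a0 = level_amplitudes(0.0)
a1 = level_amplitudes(1.0)
print(np.allclose(a1[1:], a0[:-1]))  # True: one cycle shifts weights by one level
```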

  9. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" plus voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW, the topography of responses in quiet sleep differed for "click" and voice stimuli: responses to "clicks" became diffuse, but responses to voice remained limited to temporal areas. After 35 PMW, auditory evoked delta-brushes progressively disappeared and were replaced by a low-amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex.

  10. Temporal Feature Integration for Music Organisation

    OpenAIRE

    Meng, Anders; Larsen, Jan; Hansen, Lars Kai

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisa...

  11. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. PMID:27255816
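
    The "controlling for audibility" analysis is a partial correlation: both TGDT and age are residualized on the audibility covariate, and the residuals are correlated. A sketch on synthetic data (the variable names and effect sizes below are invented for illustration; note the 0.105 ms/year slope corresponds to the reported 1.05 ms per decade):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z
    (the 'controlling for audibility' analysis; data here are synthetic)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)   # residuals of x given z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)   # residuals of y given z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
age = rng.uniform(18, 98, 500)
audibility = 0.3 * age + rng.normal(0, 5, 500)    # hearing level worsens with age
tgdt = 0.105 * age + 0.05 * audibility + rng.normal(0, 1, 500)
print(partial_corr(tgdt, age, audibility))        # positive: age effect beyond audibility
```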

  13. Auditory temporal resolution in elderly people

    Directory of Open Access Journals (Sweden)

    Flávia Duarte Liporaci

    2010-12-01

    PURPOSE: To assess auditory processing in elderly people using the Gaps-in-Noise temporal resolution test, and to verify whether the presence of hearing loss influences performance on this test. METHODS: Sixty-five elderly listeners aged 60 to 79 years were assessed with the Gaps-in-Noise test. For sample selection, an anamnesis, the mini-mental state examination, and a basic audiological evaluation were carried out. Participants were first studied as a single group and then divided into three groups according to audiometric results at 500 Hz and 1, 2, 3, 4, and 6 kHz: G1 with normal hearing, G2 with mild hearing loss, and G3 with moderate hearing loss. RESULTS: Across the whole sample, the mean gap detection threshold and percentage of correct responses were 8.1 ms and 52.6% for the right ear and 8.2 ms and 52.2% for the left ear. In G1, these measures were 7.3 ms and 57.6% for the right ear and 7.7 ms and 55.8% for the left ear; in G2, 8.2 ms and 52.5% for the right ear and 7.9 ms and 53.2% for the left ear; in G3, 9.2 ms and 45.2% for both ears. CONCLUSION: The presence of hearing loss raised gap detection thresholds and lowered the percentage of correct responses on the Gaps-in-Noise test.

  14. Visual-auditory integration for visual search: a behavioral study in barn owls

    OpenAIRE

    Yael eHazan; Inna eYarin; Yonatan eKra; Hermann eWagner; Yoram eGutfreund

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine whether the integration of visual and auditory signals contributes to the process of guiding attention towards salient st...

  15. Temporal Feature Integration for Music Organisation

    DEFF Research Database (Denmark)

    Meng, Anders

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods...... for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisation. Human evaluations of these have been obtained to assess the subjectivity of the datasets. Temporal...... ranking' approach is proposed for ranking the short-time features at larger time-scales according to their discriminative power in a music genre classification task. The multivariate AR (MAR) model has been proposed for temporal feature integration. It effectively models local dynamical structure...
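
    The MAR approach to temporal feature integration can be sketched as follows: fit a multivariate autoregressive model to the (T × D) matrix of short-time feature vectors in a frame, and use the stacked coefficients as the frame-level feature. This is a minimal least-squares version under stated assumptions; the thesis's exact estimator and feature set (e.g. inclusion of noise-covariance terms) may differ.

```python
import numpy as np

def mar_features(frame, order=3):
    """Temporal feature integration with a multivariate AR model (sketch).

    `frame`: (T, D) array of short-time feature vectors. Fits
    x_t ~ w + A_1 x_{t-1} + ... + A_p x_{t-p} by least squares and
    returns the stacked coefficients as one integrated feature vector.
    """
    T, D = frame.shape
    Y = frame[order:]                                   # (T-p, D) targets
    X = np.hstack([frame[order - k : T - k] for k in range(1, order + 1)])
    X = np.hstack([np.ones((T - order, 1)), X])         # intercept w
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)        # shape (1 + p*D, D)
    return coef.ravel()

frame = np.random.default_rng(1).normal(size=(100, 6))  # e.g. 100 MFCC frames
feat = mar_features(frame, order=3)
print(feat.shape)  # (114,): (1 + 3*6) * 6 coefficients
```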

  16. The efficacy of the Berard Auditory Integration Training method for learners with attention difficulties / Hannelie Kemp

    OpenAIRE

    Kemp, Johanna Jacoba

    2010-01-01

    Research on the Berard Auditory Integration Training method has shown improvement in the regulation of attention, activity and impulsivity of children whose auditory systems have been re-trained. Anecdotal reports have found improvements in sleeping patterns, balance, allergies, eyesight, eating patterns, depression and other seemingly unrelated physiological states. During the Auditory Integration Training (AIT) procedure dynamic music, with a wide range of frequencies, is processed through a...

  17. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural...... their audibility when embedded in similar background interferers, a phenomenon referred to as comodulation masking release (CMR). Knowledge of the auditory processing of amplitude modulations provides therefore crucial information for a better understanding of how the auditory system analyses acoustic scenes...... chapter introduces a processing stage, in which information from different peripheral frequency channels is combined. This so-called across-channel processing is assumed to take place at the output of a modulation filterbank, and is crucial in order to account for CMR conditions where the frequency...

  18. A Phenomenological Model of the Electrically Stimulated Auditory Nerve Fiber: Temporal and Biphasic Response Properties.

    Science.gov (United States)

    Horne, Colin D F; Sumner, Christian J; Seeber, Bernhard U

    2016-01-01

    We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli, in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate and fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in the threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF responses to a large variety of pulse shapes are reproduced correctly by this model.
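
    The flavor of the SLIF front end can be illustrated with a stripped-down sketch: membrane charge integrates a current pulse with a leak, and a per-trial Gaussian threshold produces the stochastic input/output function. The parameter values below are arbitrary placeholders, not the fitted ANF parameters, and the latency and biphasic machinery described in the abstract is omitted.

```python
import numpy as np

def slif_firing_prob(current, duration_us=50.0, tau_us=250.0,
                     theta=1.0, rel_spread=0.06, n_trials=2000, seed=0):
    """Firing probability of a stochastic leaky integrate-and-fire neuron
    for a monophasic current pulse (illustrative parameters only).
    Membrane state follows dv/dt = I - v/tau; the spike threshold is
    drawn per trial from a Gaussian with the given relative spread."""
    rng = np.random.default_rng(seed)
    # Closed-form membrane state at pulse offset for a constant current:
    v_end = current * tau_us * (1.0 - np.exp(-duration_us / tau_us))
    thresholds = rng.normal(theta, rel_spread * theta, n_trials)
    return np.mean(v_end >= thresholds)

# Firing probability rises sigmoidally with stimulus level:
for level in (0.018, 0.022, 0.026):
    print(level, slif_firing_prob(level))
```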

  19. Temporal feature integration for music genre classification

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan;

    2007-01-01

    , but they capture neither the temporal dynamics nor dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music genre classification. This model gives two different feature sets, the diagonal autoregressive (DAR......) and multivariate autoregressive (MAR) features which are compared against the baseline mean-variance as well as two other temporal feature integration techniques. Reproducibility in performance ranking of temporal feature integration methods were demonstrated using two data sets with five and eleven music genres...

  20. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale and the intraparietal sulcus, demonstrating that this area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
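
    The SFG stimulus itself is simple to construct. A sketch (with illustrative parameters, not necessarily the published frequency pool, chord rate, or coherence levels): each chord draws random pure-tone components from a fixed log-spaced pool; from figure onset, a small set of "coherent" frequencies repeats in every chord while the rest vary randomly.

```python
import numpy as np

def sfg_sequence(n_chords=40, n_components=10, coherence=4,
                 figure_onset=20, seed=0):
    """Stochastic figure-ground stimulus frequencies (sketch of the SFG
    design; parameter values are illustrative). Returns a list of chords,
    each a set of pure-tone frequencies, plus the 'figure' set that
    repeats across chords after `figure_onset`."""
    rng = np.random.default_rng(seed)
    freq_pool = np.round(179 * 2 ** (np.arange(60) / 12))  # ~semitone-spaced Hz
    figure = rng.choice(freq_pool, size=coherence, replace=False)
    chords = []
    for i in range(n_chords):
        if i >= figure_onset:  # figure components repeat, ground varies
            ground = rng.choice(np.setdiff1d(freq_pool, figure),
                                size=n_components - coherence, replace=False)
            chords.append(set(figure) | set(ground))
        else:                  # all components random
            chords.append(set(rng.choice(freq_pool, size=n_components,
                                         replace=False)))
    return chords, set(figure)

chords, fig = sfg_sequence()
# The figure components are present in every chord after onset:
print(all(fig <= chord for chord in chords[20:]))  # True
```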

  2. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r² for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  3. Quantifying Auditory Temporal Stability in a Large Database of Recorded Music

    Science.gov (United States)

    Ellis, Robert J.; Duan, Zhiyan; Wang, Ye

    2014-01-01

    “Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal stability of a series of detected beats. Such a method-a “Balanced Evaluation of Auditory Temporal Stability” (BEATS)–is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications. PMID:25469636
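
    A single illustrative component of such a stability analysis: given beat times from any beat tracker, summarize the inter-beat intervals by their median tempo and coefficient of variation. BEATS itself computes a fuller battery of thresholdable statistics; this sketch shows only the simplest one.

```python
import numpy as np

def ibi_stability(beat_times):
    """Simple tempo-stability summary for a list of detected beat times
    (in seconds): median tempo in BPM and the coefficient of variation
    of the inter-beat intervals (0 for perfectly steady beats)."""
    ibis = np.diff(np.asarray(beat_times))
    tempo_bpm = 60.0 / np.median(ibis)
    cv = np.std(ibis) / np.mean(ibis)
    return tempo_bpm, cv

steady = np.arange(0, 30, 0.5)             # metronomic beats every 0.5 s
tempo, cv = ibi_stability(steady)
print(round(tempo), round(cv, 3))          # 120 0.0
```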

  5. Using a staircase procedure for the objective measurement of auditory stream integration and segregation thresholds

    Directory of Open Access Journals (Sweden)

    Mona Isabel Spielmann

    2013-08-01

    Auditory scene analysis describes the ability to segregate relevant sounds out from the environment and to integrate them into a single sound stream using the characteristics of the sounds to determine whether or not they are related. This study aims to contrast task performances in objective threshold measurements of segregation and integration using identical stimuli, manipulating two variables known to influence streaming, inter-stimulus interval (ISI) and frequency difference (Δf). For each measurement, one parameter (either ISI or Δf) was held constant while the other was altered in a staircase procedure. By using this paradigm, it is possible to test within-subject across multiple conditions, covering a wide Δf and ISI range in one testing session. The objective tasks were based on across-stream temporal judgments (facilitated by integration) and within-stream deviance detection (facilitated by segregation). Results show the objective integration task is well suited for combination with the staircase procedure, as it yields consistent threshold measurements for separate variations of ISI and Δf, as well as being significantly related to the subjective thresholds. The objective segregation task appears less suited to the staircase procedure. With the integration-based staircase paradigm, a comprehensive assessment of streaming thresholds can be obtained in a relatively short space of time. This permits efficient threshold measurements, particularly in groups for which there is little prior knowledge of the relevant parameter space for streaming perception.
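
    The staircase logic can be sketched generically as a transformed up-down rule, e.g. 2-down/1-up converging near the 70.7%-correct point, with the tracked parameter (Δf or ISI) stepped between trials. The authors' exact rule, step sizes, and stopping criterion may differ; the listener below is a hypothetical stand-in.

```python
import random

def staircase(simulate_correct, start=8.0, step=1.0, n_reversals=8):
    """Adaptive 2-down/1-up staircase (sketch): the tracked parameter
    decreases after two consecutive correct responses and increases
    after each error; threshold is the mean of the late reversal points."""
    level, correct_run, reversals, last_dir = start, 0, [], 0
    while len(reversals) < n_reversals:
        if simulate_correct(level):
            correct_run += 1
            if correct_run == 2:
                correct_run, direction = 0, -1   # two correct: step down
            else:
                continue                         # wait for second correct
        else:
            correct_run, direction = 0, +1       # error: step up
        if last_dir and direction != last_dir:
            reversals.append(level)              # direction change = reversal
        last_dir = direction
        level = max(0.5, level + direction * step)
    return sum(reversals[2:]) / len(reversals[2:])  # discard early reversals

random.seed(3)
# Hypothetical listener: more likely correct at larger parameter values
listener = lambda lvl: random.random() < min(0.98, 0.5 + 0.1 * lvl)
print(staircase(listener))
```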

  6. A physiologically inspired model of auditory stream segregation based on a temporal coherence analysis

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2012-01-01

    The ability to perceptually separate acoustic sources and focus one’s attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422...... activity across frequency. Using this approach, the described model is able to quantitatively account for classical streaming phenomena relying on frequency separation and tone presentation rate, such as the temporal coherence boundary and the fission boundary [L. P. A. S. van Noorden, doctoral...... dissertation, Institute for Perception Research, Eindhoven, NL, (1975)]. The same model also accounts for the perceptual grouping of distant spectral components in the case of synchronous presentation. The most essential components of the front-end and back-end processing in the framework of the presented...
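
    The core quantity in a temporal coherence analysis is the correlation between channel envelopes over a time window: channels that pulse synchronously (high coherence) group into one stream, while channels with anti-correlated envelopes (as in an alternating tone sequence) segregate. A toy version, omitting the model's peripheral and modulation filtering stages:

```python
import numpy as np

def coherence_matrix(envelopes):
    """Pairwise correlation of channel envelopes over a time window --
    the grouping cue in a temporal-coherence analysis (sketch only)."""
    return np.corrcoef(np.asarray(envelopes))

t = np.linspace(0, 1, 200)
sync_a = 1 + np.cos(2 * np.pi * 8 * t)        # two channels pulsing together
sync_b = 1 + np.cos(2 * np.pi * 8 * t)
alt = 1 + np.cos(2 * np.pi * 8 * t + np.pi)   # alternating channel
C = coherence_matrix([sync_a, sync_b, alt])
print(C[0, 1] > 0.9, C[0, 2] < -0.9)          # True True: group vs. segregate
```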

  7. Effects of deafness and cochlear implant use on temporal response characteristics in cat primary auditory cortex.

    Science.gov (United States)

    Fallon, James B; Shepherd, Robert K; Nayagam, David A X; Wise, Andrew K; Heffer, Leon F; Landry, Thomas G; Irvine, Dexter R F

    2014-09-01

    We have previously shown that neonatal deafness of 7-13 months duration leads to loss of cochleotopy in the primary auditory cortex (AI) that can be reversed by cochlear implant use. Here we describe the effects of a similar duration of deafness and cochlear implant use on temporal processing. Specifically, we compared the temporal resolution of neurons in AI of young adult normal-hearing cats that were acutely deafened and implanted immediately prior to recording with that in three groups of neonatally deafened cats. One group of neonatally deafened cats received no chronic stimulation. The other two groups received up to 8 months of either low- or high-rate (50 or 500 pulses per second per electrode, respectively) stimulation from a clinical cochlear implant, initiated at 10 weeks of age. Deafness of 7-13 months duration had no effect on the duration of post-onset response suppression, latency, latency jitter, or the stimulus repetition rate at which units responded maximally (best repetition rate), but resulted in a statistically significant reduction in the ability of units to respond to every stimulus in a train (maximum following rate). None of the temporal response characteristics of the low-rate group differed from those in acutely deafened controls. In contrast, high-rate stimulation had diverse effects: it resulted in decreased suppression duration, longer latency and greater jitter relative to all other groups, and an increase in best repetition rate and cut-off rate relative to acutely deafened controls. The minimal effects of moderate-duration deafness on temporal processing in the present study are in contrast to its previously-reported pronounced effects on cochleotopy. Much longer periods of deafness have been reported to result in significant changes in temporal processing, in accord with the fact that duration of deafness is a major factor influencing outcome in human cochlear implantees.

  8. Implicit learning of between-group intervals in auditory temporal structures.

    Science.gov (United States)

    Terry, J; Stevens, C J; Weidemann, G; Tillmann, B

    2016-08-01

    Implicit learning of temporal structure has primarily been reported when events within a sequence (e.g., visual-spatial locations, tones) are systematically ordered and correlated with the temporal structure. An auditory serial reaction time task was used to investigate implicit learning of temporal intervals between pseudorandomly ordered syllables. Over exposure, participants identified syllables presented in sequences with weakly metrical temporal structures. In a test block, the temporal structure differed from exposure only in the duration of the interonset intervals (IOIs) between groups. It was hypothesized that reaction time (RT) to syllables following between-group IOIs would decrease with exposure and increase at test. In Experiments 1 and 2, the sequences presented over exposure and test were counterbalanced across participants (Pattern 1 and Pattern 2 conditions). An RT increase at test to syllables following between-group IOIs was only evident in the condition that presented an exposure structure with a slightly stronger meter (Pattern 1 condition). The Pattern 1 condition also elicited a global expectancy effect: Test block RT slowed to earlier-than-expected syllables (i.e., syllables shifted to an earlier beat) but not to later-than-expected syllables. Learning of between-group IOIs and the global expectancy effect extended to the Pattern 2 condition when meter was strengthened with an external pulse (Experiment 2). Experiment 3 further demonstrated implicit learning of a new weakly metrical structure with only earlier-than-expected violations at test. Overall findings demonstrate learning of weakly metrical rhythms without correlated event structures (i.e., sequential syllable orders). They further suggest the presence of a global expectancy effect mediated by metrical strength. PMID:27301354

  9. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  10. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT)

    Directory of Open Access Journals (Sweden)

    Sahar Mohammad Esmaeilzadeh

    2013-10-01

    Full Text Available Introduction: Hearing loss occurs when there is a problem with one or more parts of the ear and causes children to experience a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). Several studies have focused on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted on AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach combining AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children. Materials and Methods: We reviewed studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including, for example, Elsevier, Chor Teach, and Military Psychology. We also used reliable websites such as those of the American Choral Directors Association and the Joint Committee on Infant Hearing. The websites were reviewed, and the key words of this article were used to find appropriate references. Articles related in content to the present review were selected. Results: Recent technologies have brought about great advancement in the field of hearing disorders. These impairments can now be detected at birth, and in the majority of cases, hearing-impaired children can develop fluent spoken language through audition. According to research on the relationship between hearing-impaired children's communication and language skills and different therapeutic approaches, it is known that learning through listening and

  11. Auditory priming of frequency and temporal information: Effects of lateralized presentation

    OpenAIRE

    List, Alexandra; Justus, Timothy

    2007-01-01

    Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralized. In vision, lateralized local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli were...

  12. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Directory of Open Access Journals (Sweden)

    Juan Huang

    Full Text Available Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms), presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  13. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses. PMID:23119038

  14. Periodicity extraction in the anuran auditory nerve. II: Phase and temporal fine structure.

    Science.gov (United States)

    Simmons, A M; Reese, G; Ferragamo, M

    1993-06-01

    phase locking to simple sinusoids. Increasing stimulus intensity also shifts the synchronized responses of some fibers away from the fundamental frequency to one of the low-frequency harmonics in the stimuli. These data suggest that the synchronized firing of bullfrog eighth nerve fibers operates to extract the waveform periodicity of complex, multiple-harmonic stimuli, and this periodicity extraction is influenced by the phase spectrum and temporal fine structure of the stimuli. The similarity in response patterns of amphibian papilla and basilar papilla fibers argues that the frog auditory system employs primarily a temporal mechanism for extraction of first harmonic periodicity.

  15. Auditory processing performance in blind people

    Directory of Open Access Journals (Sweden)

    Ludmilla Vilas Boas

    2011-08-01

    Full Text Available Hearing plays an important role in human development and in the social adaptation of blind people. OBJECTIVE: To evaluate the performance of temporal auditory processing in blind people; to characterize the temporal resolution ability; to characterize the temporal ordering ability; and to compare the performance of the study population on the applied tests. METHODS: Fifteen blind adults participated in this cross-sectional study, which was approved by the Pernambuco Catholic University Ethics Committee, no. 003/2008. Data were collected with the Random Gap Detection Test (RGDT, in its original and expanded versions) and with the duration (DPS) and pitch (PPS) pattern sequence tests. RESULTS: Temporal auditory processing was excellent: the average composite threshold was 4.98 ms in the original RGDT version and 50 ms for all frequencies in the expanded version. PPS and DPS results ranged from 95% to 100%. There were no quantitative differences between the tests, but oral reports suggested that the original RGDT version was more difficult. CONCLUSIONS: The study sample performed well in temporal auditory processing, as well as in the temporal resolution and temporal ordering abilities.

  16. Electrophysiological Evidence for Incremental Lexical-Semantic Integration in Auditory Compound Comprehension

    Science.gov (United States)

    Koester, Dirk; Holle, Henning; Gunter, Thomas C.

    2009-01-01

    The present study investigated the time-course of semantic integration in auditory compound word processing. Compounding is a productive mechanism of word formation that is used frequently in many languages. Specifically, we examined whether semantic integration is incremental or is delayed until the head, the last constituent in German, is…

  17. Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns.

    Directory of Open Access Journals (Sweden)

    Andres Carrasco

    Full Text Available Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency modulated sweeps), and ecologically relevant (con-specific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.

  18. Identified auditory neurons in the cricket Gryllus rubens: temporal processing in calling song sensitive units.

    Science.gov (United States)

    Farris, Hamilton E; Mason, Andrew C; Hoy, Ronald R

    2004-07-01

    This study characterizes aspects of the anatomy and physiology of auditory receptors and certain interneurons in the cricket Gryllus rubens. We identified an 'L'-shaped ascending interneuron tuned to frequencies > 15 kHz (57 dB SPL threshold at 20 kHz). Also identified were two intrasegmental 'omega'-shaped interneurons that were broadly tuned to 3-65 kHz, with best sensitivity to frequencies of the male calling song (5 kHz, 52 dB SPL). The temporal sensitivity of units excited by calling song frequencies was measured using sinusoidally amplitude modulated stimuli that varied in both modulation rate and depth, parameters that vary with song propagation distance and the number of singing males. Omega cells responded like low-pass filters with a time constant of 42 ms. In contrast, receptors significantly coded modulation rates up to the maximum rate presented (85 Hz). Whereas omegas required approximately 65% modulation depth at 45 Hz (calling song AM) to elicit significant synchrony coding, receptors tolerated an approximately 50% reduction in modulation depth up to 85 Hz. These results suggest that omega cells in G. rubens might not play a role in detecting song modulation per se at increased distances from a singing male.
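    As an aside, the sinusoidally amplitude-modulated (SAM) stimuli described in this record have a standard closed form. The sketch below is illustrative only (not from the paper; the parameter values merely echo the abstract) and synthesizes a SAM tone with a given modulation rate and depth:

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated (SAM) tone:
    x(t) = [1 + m*sin(2*pi*fm*t)] * sin(2*pi*fc*t)

    fc    : carrier frequency (Hz)
    fm    : modulation rate (Hz), e.g. the ~45 Hz calling-song AM
    depth : modulation depth m in [0, 1] (1 = 100% depth)
    dur   : duration (s); fs : sampling rate (Hz)
    """
    t = np.arange(int(dur * fs)) / fs
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# 5 kHz carrier (calling-song band) modulated at 45 Hz with 65% depth.
# fs is chosen so that samples land exactly on the carrier peaks,
# making the peak amplitude approach (1 + depth).
x = sam_tone(fc=5000.0, fm=45.0, depth=0.65, dur=0.5, fs=40000)
```

    Sweeping `fm` and `depth` over a grid produces the kind of stimulus set used to map temporal modulation transfer functions in such experiments.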

  19. Encoding of sound localization cues by an identified auditory interneuron: effects of stimulus temporal pattern.

    Science.gov (United States)

    Samson, Annie-Hélène; Pollack, Gerald S

    2002-11-01

    An important cue for sound localization is binaural comparison of stimulus intensity. Two features of neuronal responses, response strength, i.e., spike count and/or rate, and response latency, vary with stimulus intensity, and binaural comparison of either or both might underlie localization. Previous studies at the receptor-neuron level showed that these response features are affected by the stimulus temporal pattern. When sounds are repeated rapidly, as occurs in many natural sounds, response strength decreases and latency increases, resulting in altered coding of localization cues. In this study we analyze binaural cues for sound localization at the level of an identified pair of interneurons (the left and right AN2) in the cricket auditory system, with emphasis on the effects of stimulus temporal pattern on binaural response differences. AN2 spike count decreases with rapidly repeated stimulation and latency increases. Both effects depend on stimulus intensity. Because of the difference in intensity at the two ears, binaural differences in spike count and latency change as stimulation continues. The binaural difference in spike count decreases, whereas the difference in latency increases. The proportional changes in response strength and in latency are greater at the interneuron level than at the receptor level, suggesting that factors in addition to decrement of receptor responses are involved. Intracellular recordings reveal that a slowly building, long-lasting hyperpolarization is established in AN2. At the same time, the level of depolarization reached during the excitatory postsynaptic potential (EPSP) resulting from each sound stimulus decreases. Neither these effects on membrane potential nor the changes in spiking response are accounted for by contralateral inhibition. Based on comparison of our results with earlier behavioral experiments, it is unlikely that crickets use the binaural difference in latency of AN2 responses as the main cue for

  20. Diffusion tensor imaging of dolphin brains reveals direct auditory pathway to temporal lobe

    OpenAIRE

    Berns, Gregory S.; Cook, Peter F.; Foxley, Sean; Jbabdi, Saad; Miller, Karla L.; Marino, Lori

    2015-01-01

    The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this...

  1. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  2. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

    Full Text Available There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time-periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that this phenomenon should be apparent in naturalistic visual experiences. We tested this using movie-clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty to follow stimuli at the hypothesized 2-3 second scrambling condition. Moreover, only this difference could not be accounted for by low-level visual properties. This provides the first evidence that this 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.

  3. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    OpenAIRE

    Sugihara, T.; Diltz, M. D.; Averbeck, B. B.; Romanski, L. M.

    2006-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The mul...

  4. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech.

  5. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech.
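    Synchrony judgments collected over a set of SOAs like the one above are commonly summarized by a point of subjective simultaneity (PSS) and a window width. The sketch below uses hypothetical response proportions invented for illustration; only the 13 SOA values come from the abstract, and the weighted-mean estimator is one simple choice among several (a Gaussian fit is also common):

```python
import numpy as np

# 13 audiovisual SOAs (negative = auditory leads), in ms, as in the abstract
soas = np.array([-360, -300, -240, -180, -120, -60, 0,
                 60, 120, 180, 240, 300, 360])

# Hypothetical proportions of "synchronous" responses at each SOA
p_sync = np.array([0.02, 0.05, 0.15, 0.40, 0.75, 0.95, 1.00,
                   0.97, 0.85, 0.55, 0.25, 0.08, 0.03])

# Point of subjective simultaneity: probability-weighted mean SOA
pss = np.sum(soas * p_sync) / np.sum(p_sync)

# Integration-window width: span of SOAs where synchrony responses exceed 50%
window = soas[p_sync > 0.5]
width = window.max() - window.min()
```

    With these made-up data the PSS comes out at roughly +13 ms (audition lagging), qualitatively like the auditory lags reported in synchrony studies; narrower windows, as found for musicians, would shrink `width`.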

  6. Effects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure.

    Science.gov (United States)

    Moore, Brian C J

    2016-01-01

    Within the cochlea, broadband sounds like speech and music are filtered into a series of narrowband signals, each of which can be considered as a relatively slowly varying envelope (ENV) imposed on a rapidly oscillating carrier (the temporal fine structure, TFS). Information about ENV and TFS is conveyed in the timing and short-term rate of nerve spikes in the auditory nerve. There is evidence that both hearing loss and increasing age adversely affect the ability to use TFS information, but in many studies the effects of hearing loss and age have been confounded. This paper summarises evidence from studies that allow some separation of the effects of hearing loss and age. The results suggest that the monaural processing of TFS information, which is important for the perception of pitch and for segregating speech from background sounds, is adversely affected by both hearing loss and increasing age, the former being more important. The monaural processing of ENV information is hardly affected by hearing loss or by increasing age. The binaural processing of TFS information, which is important for sound localisation and the binaural masking level difference, is also adversely affected by both hearing loss and increasing age, but here the latter seems more important. The deterioration of binaural TFS processing with increasing age appears to start relatively early in life. The binaural processing of ENV information also deteriorates somewhat with increasing age. The reduced binaural processing abilities found for older/hearing-impaired listeners may partially account for the difficulties that such listeners experience in situations where the target speech and interfering sounds come from different directions in space, as is common in everyday life. PMID:27080640
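    The ENV/TFS decomposition Moore describes is conventionally computed from the analytic signal (Hilbert transform). Below is a minimal NumPy sketch, not taken from the paper, that splits a narrowband signal into a slowly varying envelope and a unit-amplitude fine structure:

```python
import numpy as np

def env_tfs(x):
    """Split a narrowband signal into envelope (ENV) and temporal
    fine structure (TFS) via an FFT-based analytic signal."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)               # weights that zero out negative frequencies
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    env = np.abs(analytic)            # slowly varying envelope
    tfs = np.cos(np.angle(analytic))  # rapidly oscillating carrier
    return env, tfs

# Check on a synthetic AM tone: 1 kHz carrier, 40 Hz modulator, fs = 16 kHz.
fs = 16000
t = np.arange(int(0.1 * fs)) / fs
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 40 * t)
x = modulator * np.cos(2 * np.pi * 1000 * t)
env, tfs = env_tfs(x)
```

    Because both components are exactly periodic over the 0.1 s window, the recovered envelope equals the modulator to numerical precision here; for real narrowband speech or music bands the split is approximate.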

  7. Forward Masking: Temporal Integration or Adaptation?

    DEFF Research Database (Denmark)

    Ewert, Stephan D.; Hau, Ole; Dau, Torsten

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-date...

  8. Encoding of Temporal Information by Timing, Rate, and Place in Cat Auditory Cortex

    OpenAIRE

    Imaizumi, Kazuo; Priebe, Nicholas J.; Sharpee, Tatyana O.; Cheung, Steven W.; Schreiner, Christoph E.

    2010-01-01

    A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (...

  9. Electrophysiological evidence for incremental lexical-semantic integration in auditory compound comprehension

    OpenAIRE

    Koester, Dirk; Holle, Henning; Gunter, Thomas C.

    2009-01-01

    The present study investigated the time-course of semantic integration in auditory compound word processing. Compounding is a productive mechanism of word formation that is used frequently in many languages. Specifically, we examined whether semantic integration is incremental or is delayed until the head, the last constituent in German, is available. Stimuli were compounds consisting of three nouns, and the semantic plausibility of the second and the third constituent was manipulated independently...

  10. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)

    2001-12-01

    To identify and evaluate the normal anatomy of the nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval, contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of the canals for the labyrinthine segment of the facial nerve, the superior vestibular and cochlear nerves, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, the canals for the labyrinthine segment of the facial nerve and the superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve traversed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  11. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. The present study analyzed the role of modulation filtering ... in the preprocessing of different speech intelligibility models by comparing predictions from models that either assume a spectro-temporal (i.e., two-dimensional) or a temporal-only (i.e., one-dimensional) modulation filterbank. Furthermore, the role of the decision metric for speech intelligibility was investigated ... subtraction. The results suggested that a decision metric based on the SNRenv may provide a more general basis for predicting speech intelligibility than a metric based on the MTF. Moreover, the one-dimensional modulation filtering process was found to be sufficient to account for the data when combined

  12. Binding ‘when’ and ‘where’ impairs temporal, but not spatial recall in auditory and visual working memory

    Directory of Open Access Journals (Sweden)

    Franco Delogu

    2012-03-01

    Full Text Available Information about where and when events happened seems naturally linked, but only a few studies have investigated whether and how the two are associated in working memory. We tested whether the location of items and their temporal order are jointly or independently encoded. We also verified whether spatio-temporal binding is influenced by the sensory modality of the items. Participants were requested to memorize the location and/or the serial order of five items (environmental sounds or pictures) presented sequentially from five different locations. Next, they were asked to recall either the item locations or their order of presentation within the sequence. Attention during encoding was manipulated by contrasting blocks of trials in which participants were requested to encode only one feature with blocks of trials where they had to encode both features. Results show an interesting interaction between task and attention. Accuracy in serial order recall was affected by the simultaneous encoding of item location, whereas the recall of item location was unaffected by the concurrent encoding of serial order. This asymmetric influence of attention on the two tasks was similar for the auditory and visual modalities. Together, these data indicate that item location is processed in a relatively automatic fashion, whereas maintaining serial order is more demanding in terms of attention. The remarkably analogous results for auditory and visual memory performance suggest that the binding of serial order and location in working memory is not modality-dependent, and may involve common intersensory mechanisms.

  13. ERPs reveal the temporal dynamics of auditory word recognition in specific language impairment.

    Science.gov (United States)

    Malins, Jeffrey G; Desroches, Amy S; Robertson, Erin K; Newman, Randy Lynn; Archibald, Lisa M D; Joanisse, Marc F

    2013-07-01

    We used event-related potentials (ERPs) to compare auditory word recognition in children with specific language impairment (SLI group; N=14) to a group of typically developing children (TD group; N=14). Subjects were presented with pictures of items and heard auditory words that either matched or mismatched the pictures. Mismatches overlapped expected words in word-onset (cohort mismatches; see: DOLL, hear: dog), rhyme (CONE -bone), or were unrelated (SHELL -mug). In match trials, the SLI group showed a different pattern of N100 responses to auditory stimuli compared to the TD group, indicative of early auditory processing differences in SLI. However, the phonological mapping negativity (PMN) response to mismatching items was comparable across groups, suggesting that just like TD children, children with SLI are capable of establishing phonological expectations and detecting violations of these expectations in an online fashion. Perhaps most importantly, we observed a lack of attenuation of the N400 for rhyming words in the SLI group, which suggests that either these children were not as sensitive to rhyme similarity as their typically developing peers, or did not suppress lexical alternatives to the same extent. These findings help shed light on the underlying deficits responsible for SLI.

  14. HIT, hallucination focused integrative treatment as early intervention in psychotic adolescents with auditory hallucinations : a pilot study

    NARCIS (Netherlands)

    Jenner, JA; van de Willige, G

    2001-01-01

    Objective: Early intervention in psychosis is considered important in relapse prevention. The limited results of monotherapies have prompted the development of multimodular programmes. The present study tests the feasibility and effectiveness of HIT, an integrative early intervention treatment for auditory hallucinations.

  15. Visual-Auditory Integration during Speech Imitation in Autism

    Science.gov (United States)

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  16. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, overcoming artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information, in terms of the dynamic aspects of F0 control, to recent findings obtained with the frequency-shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)]. In a series of experiments, the dependence of F0-control system parameters on subject, F0, and style (musical expression and speaking) was tested using six participants: three male and three female students specializing in musical education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing feedback speech whose F0 was modulated using an M-sequence. The results qualitatively replicated a previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by a Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
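    The M-sequence used to modulate the feedback F0 is a standard maximal-length binary sequence. A minimal sketch of how one can be generated with a linear-feedback shift register follows; the 7-bit register and taps [7, 6] are illustrative choices, not the study's actual parameters:

```python
def m_sequence(n_bits=7, taps=(7, 6), length=127):
    """Maximal-length binary sequence (period 2**7 - 1 = 127) from a Fibonacci
    LFSR using the standard tap set [7, 6]; any nonzero seed works."""
    reg = [1] * n_bits
    out = []
    for _ in range(length):
        out.append(reg[-1])                          # output the last stage
        fb = reg[taps[0] - 1] ^ reg[taps[1] - 1]     # XOR of the tapped stages
        reg = [fb] + reg[:-1]                        # shift, feed back into stage 1
    return out

seq = m_sequence()
print(len(seq), sum(seq))   # one full period is balanced: 64 ones, 63 zeros
```

    M-sequences are attractive as perturbation signals because their autocorrelation is nearly impulsive, which simplifies system identification of the F0-control loop.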

  17. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    Science.gov (United States)

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26707975

  18. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
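    The mutual-information comparison across candidate codes can be sketched with a simple plug-in (histogram) estimator. The data below are synthetic stand-ins for illustration, not the recorded cortical responses:

```python
import numpy as np

def mutual_information(stim, resp, n_bins=8):
    """Plug-in MI estimate (bits) from the joint histogram of two variables."""
    joint, _, _ = np.histogram2d(stim, resp, bins=n_bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)            # marginal over responses
    p_y = p_xy.sum(axis=0, keepdims=True)            # marginal over stimuli
    nz = p_xy > 0                                    # skip empty cells in the log
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(1)
rates = rng.integers(0, 4, size=5000)                # stimulus repetition-rate index
isi_code = rates + rng.normal(0, 0.3, 5000)          # informative "ISI" response
rate_code = rng.normal(0, 1, 5000)                   # uninformative "rate" response
print(mutual_information(rates, isi_code), mutual_information(rates, rate_code))
```

    With these synthetic responses, the informative code carries far more bits about the stimulus than the uninformative one, mirroring the paper's comparison of ISI, timing, and rate codes.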

  19. Shaping prestimulus neural activity with auditory rhythmic stimulation improves the temporal allocation of attention

    Science.gov (United States)

    Pincham, Hannah L.; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes

    2016-01-01

    Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensory events to improve the allocation of attention in time. PMID:26986506

  20. The time window of multisensory integration: relating reaction times and judgments of temporal order.

    Science.gov (United States)

    Diederich, Adele; Colonius, Hans

    2015-04-01

    Even though visual and auditory information of one and the same event often do not arrive at the sensory receptors at the same time, due to different physical transmission times of the modalities, the brain maintains a unitary perception of the event, at least within a certain range of sensory arrival time differences. The properties of this "temporal window of integration" (TWIN), its recalibration due to task requirements, attention, and other variables, have recently been investigated intensively. Up to now, however, there has been no consistent definition of "temporal window" across different paradigms for measuring its width. Here we propose such a definition based on our TWIN model (Colonius & Diederich, 2004). It applies to judgments of temporal order (or simultaneity) as well as to reaction time (RT) paradigms. Reanalyzing data from Mégevand, Molholm, Nayak, & Foxe (2013) by fitting the TWIN model to data from both paradigms, we confirmed the authors' hypothesis that the temporal window in an RT task tends to be wider than in a temporal-order judgment (TOJ) task. This first step toward a unified concept of TWIN should be a valuable tool in guiding investigations of the neural and cognitive bases of this thus-far somewhat elusive concept. PMID:25706404
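    The TWIN model's core quantity, the probability that the auditory and visual peripheral processes terminate within a common temporal window, can be sketched by Monte Carlo simulation. The exponential processing times and all parameter values below are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def p_integration(soa_ms, window_ms, mean_a=60.0, mean_v=80.0, n=100_000, seed=0):
    """TWIN-style estimate: integration occurs when the auditory and visual
    peripheral processing times (exponential, illustrative means) terminate
    within `window_ms` of each other, given a stimulus onset asynchrony."""
    rng = np.random.default_rng(seed)
    t_a = rng.exponential(mean_a, n)            # auditory peripheral finishing time
    t_v = soa_ms + rng.exponential(mean_v, n)   # visual stimulus starts soa_ms later
    return float(np.mean(np.abs(t_a - t_v) < window_ms))

# A wider temporal window yields a higher probability of integration.
print(p_integration(50, 200), p_integration(50, 100))
```

    This makes the paper's point concrete: the estimated window width directly controls how often crossmodal integration is predicted to occur for a given asynchrony.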

  1. Functional MRI observation of the auditory Chinese lexical processing mechanism in the left anterior temporal lobe

    Institute of Scientific and Technical Information of China (English)

    王晓怡; 卢洁; 李坤成; 张苗; 徐国庆; 舒华

    2011-01-01

    Objective: To explore the neural mechanism of auditory Chinese lexical processing in the left anterior temporal lobe (ATL) of healthy participants using functional magnetic resonance imaging (fMRI). Methods: Fifteen right-handed healthy participants (5 males, 10 females) were scanned on a 3.0 T MRI system with a standard head coil while they either repeated auditory words (repetition task) or judged whether auditory items denoted something dangerous (semantic judgment task). AFNI was used to process the fMRI data and to localize the functional areas for each task, and their differences, in the anterior temporal lobe. Results: The semantic judgment task activated the left anterior middle and inferior temporal gyri more than the repetition task, whereas the repetition task activated the left anterior superior temporal gyrus more than the semantic judgment task. Thus phonological processing of auditory Chinese lexical information was located in the anterior superior temporal gyrus, semantic processing in the anterior middle and inferior temporal gyri, and the two processes were segregated. Conclusion: The ATL supports semantic integration. Two pathways to semantic access were identified: a direct pathway in the dorsal temporal lobe for the repetition task and an indirect pathway in the ventral temporal lobe for the semantic judgment task.

  2. Spatial interactions determine temporal feature integration as revealed by unmasking

    Directory of Open Access Journals (Sweden)

    Michael H. Herzog

    2006-01-01

    Full Text Available Feature integration is one of the most fundamental problems in neuroscience. In a recent contribution, we showed that a trailing grating can diminish the masking effects one vernier exerts on another, preceding vernier. Here, we show that this temporal unmasking depends on neural spatial interactions related to the trailing grating. Hence, our paradigm allows us to study the spatio-temporal interactions underlying feature integration.

  3. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration

    Directory of Open Access Journals (Sweden)

    Tiziana eVercillo

    2015-05-01

    Full Text Available The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested subjects in an attentional and a non-attentional condition. In the attention experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations) arranged along the horizontal axis were presented sequentially. In the primary task, subjects had to evaluate the position of the second stimulus (the probe) with respect to the others (a space bisection task). In the secondary task, they had to occasionally report changes in the duration of the second auditory stimulus. In the non-attentional condition, participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. Interestingly, in both conditions the multisensory results are well predicted by the MLE model. The results of this study support the idea that modality-specific attention modulates multisensory integration.
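    The MLE prediction referred to above is the standard reliability-weighted cue combination. A minimal sketch (the positions and noise levels are arbitrary illustrative numbers):

```python
import numpy as np

def mle_combine(x_a, sigma_a, x_t, sigma_t):
    """Reliability-weighted (maximum-likelihood) fusion of two position estimates."""
    w_a = sigma_t**2 / (sigma_a**2 + sigma_t**2)    # auditory weight
    x_hat = w_a * x_a + (1 - w_a) * x_t             # fused position estimate
    sigma_hat = np.sqrt((sigma_a**2 * sigma_t**2) / (sigma_a**2 + sigma_t**2))
    return x_hat, sigma_hat

# If attention sharpens audition (smaller sigma_a), the fused estimate is
# pulled toward the auditory cue: the auditory weight rises from 0.2 to 0.5.
x1, s1 = mle_combine(10.0, 2.0, 0.0, 1.0)   # unattended: audition noisier
x2, s2 = mle_combine(10.0, 1.0, 0.0, 1.0)   # attended: audition more reliable
print(x1, x2)
```

    The fused variance is always smaller than either single-cue variance, which is the signature of optimal integration the study tests against.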

  4. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses at 20 ms before and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control.

  5. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders

    OpenAIRE

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuro-imaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such int...

  6. Temporal resolution of the Florida manatee (Trichechus manatus latirostris) auditory system.

    Science.gov (United States)

    Mann, David A; Colbert, Debborah E; Gaspard, Joseph C; Casper, Brandon M; Cook, Mandy L H; Reep, Roger L; Bauer, Gordon B

    2005-10-01

    Auditory evoked potentials (AEPs) of two Florida manatees (Trichechus manatus latirostris) were measured in response to amplitude-modulated tones. The AEP measurements showed weak responses to test stimuli from 4 kHz to 40 kHz. The manatee modulation rate transfer function (MRTF) is maximally sensitive to 150 and 600 Hz amplitude modulation (AM) rates. The 600 Hz AM rate is midway between the AM sensitivities of terrestrial mammals (chinchillas, gerbils, and humans) (80-150 Hz) and dolphins (1,000-1,200 Hz). Audiograms estimated from the input-output functions of the EPs greatly underestimate behavioral hearing thresholds measured in two other manatees. This underestimation is probably due to the electrodes being located several centimeters from the brain. PMID:16001184

  7. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

    Yael eHazan

    2015-02-01

    Full Text Available Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely towards the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target.

  8. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding-the inability to see an object when it is surrounded by flankers in the periphery-does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiment 2 and 3) measures showed congruency effect in only the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration-the simplest kind of temporal semantic integration-did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  10. Spatio-temporal data analytics for wind energy integration

    CERN Document Server

    Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief presents spatio-temporal data analytics for wind energy integration using stochastic modeling and optimization methods. It explores techniques for efficiently integrating renewable energy generation into bulk power grids. The operational challenges of wind, and its variability, are carefully examined. A spatio-temporal analysis approach enables the authors to develop Markov-chain-based short-term forecasts of wind farm power generation. To deal with the wind ramp dynamics, a support vector machine enhanced Markov model is introduced. The stochastic optimization of economic dispatch is also addressed.
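    The Markov-chain-based short-term forecast described here can be sketched as follows. The discretization into three power levels and the toy state sequence are illustrative assumptions; the book's models are far richer:

```python
import numpy as np

def fit_transition_matrix(states, n_states):
    """Estimate a first-order Markov transition matrix from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Unvisited states get a uniform row so every row remains a distribution.
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, 1 / n_states),
                     where=row_sums > 0)

def forecast(P, current_state, steps=1):
    """k-step-ahead probability distribution over discretized power states."""
    dist = np.zeros(P.shape[0])
    dist[current_state] = 1.0
    return dist @ np.linalg.matrix_power(P, steps)

# Toy example: wind power discretized into 3 levels (low / mid / high).
states = [0, 0, 1, 1, 2, 1, 0, 1, 2, 2, 1, 0]
P = fit_transition_matrix(states, 3)
print(forecast(P, current_state=0, steps=3))
```

    In practice the state space would also encode spatial correlations between farms and ramp dynamics, which is where the support-vector-machine enhancement comes in.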

  11. Temporal structure and complexity affect audio-visual correspondence detection

    OpenAIRE

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task re...

  14. Auditory evoked potentials to spectro-temporal modulation of complex tones in normal subjects and patients with severe brain injury.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L; Stokes, M; Munday, R; Haque, N

    2000-05-01

    In order to assess higher auditory processing capabilities, long-latency auditory evoked potentials (AEPs) were recorded to synthesized musical instrument tones in 22 post-comatose patients with severe brain injury causing variably attenuated behavioural responsiveness. On the basis of normative studies, three different types of spectro-temporal modulation were employed. When a continuous 'clarinet' tone changes pitch once every few seconds, N1/P2 potentials are evoked at latencies of approximately 90 and 180 ms, respectively. Their distribution in the fronto-central region is consistent with generators in the supratemporal cortex of both hemispheres. When the pitch is modulated at a much faster rate ( approximately 16 changes/s), responses to each change are virtually abolished but potentials with similar distribution are still elicited by changing the timbre (e.g. 'clarinet' to 'oboe') every few seconds. These responses appear to represent the cortical processes concerned with spectral pattern analysis and the grouping of frequency components to form sound 'objects'. Following a period of 16/s oscillation between two pitches, a more anteriorly distributed negativity is evoked on resumption of a steady pitch. Various lines of evidence suggest that this is probably equivalent to the 'mismatch negativity' (MMN), reflecting a pre-perceptual, memory-based process for detection of change in spectro-temporal sound patterns. This method requires no off-line subtraction of AEPs evoked by the onset of a tone, and the MMN is produced rapidly and robustly with considerably larger amplitude (usually >5 microV) than that to discontinuous pure tones. 
    In the brain-injured patients, the presence of AEPs to two or more complex tone stimuli (in the combined assessment of two authors who were 'blind' to the clinical and behavioural data) was significantly associated with the demonstrable possession of discriminative hearing (the ability to respond differentially to verbal commands).

  15. Photonic temporal integrator for all-optical computing.

    Science.gov (United States)

    Slavík, Radan; Park, Yongwoo; Ayotte, Nicolas; Doucet, Serge; Ahn, Tae-Jung; LaRochelle, Sophie; Azaña, José

    2008-10-27

    We report the first experimental realization of an all-optical temporal integrator. The integrator is implemented using an all-fiber active (gain-assisted) filter based on superimposed fiber Bragg gratings made in an Er-Yb co-doped optical fiber that behaves like an 'optical capacitor'. Functionality of this device was tested by integrating different optical pulses, with time duration down to 60 ps, and by integration of two consecutive pulses that had different relative phases, separated by up to 1 ns. The potential of the developed device for implementing all-optical computing systems for solving ordinary differential equations was also experimentally tested. PMID:18958098
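    The "optical capacitor" behaviour, temporal integration of the input waveform, has a simple discrete-time analogue. The following numerical sketch illustrates the operating principle only; it is not a model of the photonic device itself:

```python
import numpy as np

def leaky_integrator(x, dt, tau=np.inf):
    """Discrete-time integrator y' = -y/tau + x; tau=inf gives an ideal integrator."""
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        leak = y[i - 1] / tau if np.isfinite(tau) else 0.0
        y[i] = y[i - 1] + dt * (x[i - 1] - leak)
    return y

dt = 1e-3
t = np.arange(0, 1, dt)
pulse = ((t > 0.1) & (t < 0.2)).astype(float)   # 100 ms rectangular input pulse
y = leaky_integrator(pulse, dt)
print(y[-1])   # the ideal integral of the pulse is its area, 0.1
```

    A finite `tau` models the gain-limited memory of a real device: the stored "charge" decays instead of being held indefinitely, which bounds the usable integration window (here, the ~1 ns pulse separation reported above).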

  16. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy.

    Science.gov (United States)

    Rocha, Caroline Nunes; Miziara, Carmen Silvia Molleis Galego; Manreza, Maria Luiza Giraldes de; Schochat, Eliane

    2010-02-01

    The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) for subjects with left mesial temporal sclerosis (LMTS) in relation to a behavioral test, the Dichotic Digits Test (DDT), and an event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls; this difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. As a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions relative to controls, the difference being statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of the P300 between affected and unaffected hemispheres.

  17. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp eHoman

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  18. Quantifying Auditory Temporal Stability in a Large Database of Recorded Music

    OpenAIRE

    Ellis, Robert J.; Zhiyan Duan; Ye Wang

    2014-01-01

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). A...

  19. Species specificity of temporal processing in the auditory midbrain of gray treefrogs: long-interval neurons.

    Science.gov (United States)

    Hanson, Jessica L; Rose, Gary J; Leary, Christopher J; Graham, Jalina A; Alluri, Rishi K; Vasquez-Opazo, Gustavo A

    2016-01-01

    In recently diverged gray treefrogs (Hyla chrysoscelis and H. versicolor), advertisement calls that differ primarily in pulse shape and pulse rate act as an important premating isolation mechanism. Temporally selective neurons in the anuran inferior colliculus may contribute to selective behavioral responses to these calls. Here we present in vivo extracellular and whole-cell recordings from long-interval-selective neurons (LINs) made during presentation of pulses that varied in shape and rate. Whole-cell recordings revealed that interplay between excitation and inhibition shapes long-interval selectivity. LINs in H. versicolor showed greater selectivity for slow-rise pulses, consistent with the slow-rise pulse characteristics of their calls. The steepness of pulse-rate tuning functions, but not the distributions of best pulse rates, differed between the species in a manner that depended on whether pulses had a slow- or fast-rise shape. When tested with stimuli representing the temporal structure of the advertisement calls of H. chrysoscelis or H. versicolor, approximately 27% of LINs in H. versicolor responded exclusively to the latter stimulus type. The LINs of H. chrysoscelis were less selective. Encounter calls, which are produced at similar pulse rates in both species (≈5 pulses/s), are likely to be effective stimuli for the LINs of both species. PMID:26614093

  20. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: the processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series that could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence-processing network, in both the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment in which participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  1. Comparison of LFP-Based and Spike-Based Spectro-Temporal Receptive Fields and Cross-Correlation in Cat Primary Auditory Cortex

    OpenAIRE

    Eggermont, Jos J.; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that ...
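
    A spike-based STRF of the kind compared here is often estimated by reverse correlation. The sketch below (not the authors' analysis pipeline; `strf_spike_triggered_average` is a hypothetical helper) computes an STRF as the spike-triggered average of a toy stimulus spectrogram:

```python
import numpy as np

def strf_spike_triggered_average(spec, spikes, n_lags):
    """STRF as the average stimulus spectrogram preceding each spike.

    spec:   (n_freq, n_frames) stimulus spectrogram
    spikes: (n_frames,) spike counts per frame
    n_lags: number of time bins of stimulus history to include
    """
    n_freq, n_frames = spec.shape
    sta = np.zeros((n_freq, n_lags))
    n_spikes = 0
    for t in range(n_lags, n_frames):
        if spikes[t] > 0:
            sta += spikes[t] * spec[:, t - n_lags:t]
            n_spikes += spikes[t]
    return sta / max(n_spikes, 1)

# Toy check: a cell that fires whenever channel 2 was active 3 bins earlier
rng = np.random.default_rng(0)
spec = rng.random((5, 500))
spikes = np.roll((spec[2, :] > 0.8).astype(int), 3)
sta = strf_spike_triggered_average(spec, spikes, n_lags=5)
i, j = np.unravel_index(sta.argmax(), sta.shape)
print(int(i), int(j))  # -> 2 2  (channel 2, lag bin 3 before the spike)
```

    LFP-based STRFs are obtained analogously by correlating the continuous LFP trace, rather than a spike train, with the stimulus.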

  2. Auditory Integration Training: A Double-Blind Study of Behavioral and Electrophysiological Effects in People with Autism.

    Science.gov (United States)

    Edelson, Stephen M.; Arin, Deborah; Bauman, Margaret; Lukas, Scott E.; Rudy, Jane H.; Sholar, Michelle; Rimland, Bernard

    1999-01-01

    Nineteen individuals with autism listened either to music processed by auditory integration training or to unprocessed music for 20 half-hour sessions. A significant decrease in Aberrant Behavior Checklist scores was observed in the experimental group at the 30-month follow-up assessment. In addition, three experimental subjects but no controls showed a…

  3. Calcium-dependent control of temporal processing in an auditory interneuron: a computational analysis.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2010-09-01

    Sensitivity to acoustic amplitude modulation in crickets differs between species and depends on carrier frequency (e.g., calling-song vs. bat-ultrasound bands). Using computational tools, we explore how Ca(2+)-dependent mechanisms underlying selective attention can contribute to such differences in amplitude modulation sensitivity. For omega neuron 1 (ON1), selective attention is mediated by Ca(2+)-dependent feedback: [Ca(2+)](internal) increases with excitation, activating a Ca(2+)-dependent after-hyperpolarizing current. We propose that the Ca(2+) removal rate and the size of the after-hyperpolarizing current can determine ON1's temporal modulation transfer function (TMTF). This is tested using a conductance-based simulation calibrated to responses in vivo. The model shows that parameter values that simulate responses to single pulses are sufficient to simulate responses to modulated stimuli: no special modulation-sensitive mechanisms are necessary, as the high- and low-pass portions of the TMTF are due to Ca(2+)-dependent spike frequency adaptation and postsynaptic potential depression, respectively. Furthermore, variance in the two biophysical parameters is sufficient to produce TMTFs of varying bandwidth, shifting amplitude modulation sensitivity like that observed in different species and in response to different carrier frequencies. Thus, the hypothesis that the size of the after-hyperpolarizing current and the rate of Ca(2+) removal can affect amplitude modulation sensitivity is computationally validated.
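
    The proposed mechanism lends itself to a compact caricature. The following is a toy rate model under assumed parameters (`response`, `ca_decay`, and `g_ahp` are all hypothetical), not the authors' conductance-based ON1 simulation: activity raises intracellular Ca, Ca gates a subtractive after-hyperpolarizing (AHP) term, and the Ca removal rate then reshapes how deeply the response follows amplitude modulation:

```python
import numpy as np

def response(am_freq, ca_decay, g_ahp, dur=2.0, dt=1e-4):
    """Toy rate neuron with a Ca-dependent after-hyperpolarization (AHP).

    The drive is a sinusoidally amplitude-modulated input; intracellular Ca
    accumulates with the response and gates a subtractive AHP term.
    Returns the peak-to-trough depth of the steady-state response.
    """
    t = np.arange(0.0, dur, dt)
    drive = 0.5 * (1.0 + np.sin(2.0 * np.pi * am_freq * t))  # AM input
    r = np.zeros_like(t)
    ca = 0.0
    for i in range(1, len(t)):
        rate = max(drive[i] - g_ahp * ca, 0.0)  # AHP subtracts from the drive
        ca += dt * (rate - ca_decay * ca)       # Ca builds with activity, decays
        r[i] = rate
    half = r[len(r) // 2:]                      # steady-state portion
    return half.max() - half.min()

# Slow Ca removal attenuates slow modulations more than fast removal does
slow = [round(response(f, ca_decay=5.0, g_ahp=4.0), 3) for f in (2.0, 32.0)]
fast = [round(response(f, ca_decay=50.0, g_ahp=4.0), 3) for f in (2.0, 32.0)]
print(slow[0] < fast[0])  # -> True
```

    Varying only `ca_decay` and `g_ahp` changes how strongly the 2 Hz response depth is suppressed, loosely mirroring the claim that these two parameters alone can shift AM sensitivity.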

  4. Conditioning the cochlea to facilitate survival and integration of exogenous cells into the auditory epithelium.

    Science.gov (United States)

    Park, Yong-Ho; Wilson, Kevin F; Ueda, Yoshihisa; Tung Wong, Hiu; Beyer, Lisa A; Swiderski, Donald L; Dolan, David F; Raphael, Yehoash

    2014-04-01

    The mammalian auditory epithelium (AE) cannot replace supporting cells and hair cells once they are lost. Therefore, sensorineural hearing loss associated with missing cells is permanent. This inability to regenerate critical cell types makes the AE a potential target for cell replacement therapies such as stem cell transplantation. Inserting stem cells into the AE of deaf ears is a complicated task due to the hostile, high potassium environment of the scala media in the cochlea, and the robust junctional complexes between cells in the AE that resist stem cell integration. Here, we evaluate whether temporarily reducing potassium levels in the scala media and disrupting the junctions in the AE make the cochlear environment more receptive and facilitate survival and integration of transplanted cells. We used sodium caprate to transiently disrupt the AE junctions, replaced endolymph with perilymph, and blocked stria vascularis pumps with furosemide. We determined that these three steps facilitated survival of HeLa cells in the scala media for at least 7 days and that some of the implanted cells formed a junctional contact with native AE cells. The data suggest that manipulation of the cochlear environment facilitates survival and integration of exogenously transplanted HeLa cells in the scala media. PMID:24394296

  5. Modified Temporal Key Integrity Protocol For Efficient Wireless Network Security

    OpenAIRE

    Doomun, M. Razvi; Soyjaudah, KM Sunjiv

    2012-01-01

    Temporal Key Integrity Protocol (TKIP) is the IEEE Task Group i (TGi) solution for the security loopholes present in the already widely deployed 802.11 hardware. It is a set of algorithms that wraps WEP to give the best possible solution given design constraints such as the paucity of CPU cycles, the hardwiring of the WEP encryption algorithm, and dependence on software upgrades. Thus, TKIP is significantly more difficult and challenging to implement and optimise than WEP. The objective of this research is to ...

  6. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an epileptic syndrome characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were subsequently screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in successive generations, which is especially important for genetic counselling.

  7. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf;

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC. Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.

  8. Medial temporal lobe roles in human path integration.

    Directory of Open Access Journals (Sweden)

    Naohide Yamamoto

    Full Text Available Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.

  9. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system cons

  10. Evaluation of temporal bone pneumatization on high resolution CT (HRCT) measurements of the temporal bone in normal and otitis media group and their correlation to measurements of internal auditory meatus, vestibular or cochlear aqueduct

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Miyako

    1988-07-01

    High-resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These comprised 109 sides with normal pneumatization (NR group) and 73 sides with poor pneumatization resulting from chronic otitis media (OM group). The NR group included cases of sensorineural hearing loss and/or sudden deafness on the imaged side. The three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct, and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter and length of the internal auditory meatus, the vestibular aqueduct, or the cochlear aqueduct were also measured. Measurements of the temporal bone showed a statistically significant difference between the NR and OM groups. Correlations of both the diameter and length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither the measurements of the vestibular aqueduct nor those of the cochlear aqueduct showed any significant correlation with those of the temporal bone.

  11. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  12. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  13. Auditory Display

    DEFF Research Database (Denmark)

    The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.

  14. The effect of spectrally and temporally altered auditory feedback on speech intonation by hard of hearing listeners

    Science.gov (United States)

    Barac-Cikoja, Dragana; Tamaki, Chizuko; Thomas, Lannie

    2003-04-01

    Eight listeners with severe to profound hearing loss read a six-sentence passage under spectrally altered and/or delayed auditory feedback. Spectral manipulation was implemented by filtering the speech signal into either one or four frequency bands, extracting respective amplitude envelope(s), and amplitude-modulating the corresponding noise band(s). Thus, the resulting auditory feedback did not preserve intonation information, although the four-band noise signal remained intelligible. The two noise conditions and the unaltered speech were each tested under the simultaneous and three delayed (50 ms, 100 ms, 200 ms) feedback conditions. Auditory feedback was presented via insert earphones at the listener's most comfortable level. Recorded speech was analyzed for the form and domain of the fundamental frequency (f0) declination, the magnitude of the sentence initial f0 peak (P1), and the fall-rise pattern of f0 at the phrasal boundaries. A significant interaction between the two feedback manipulations was found. Intonation characteristics were affected by speech delay only under the spectrally unaltered feedback: The magnitude of P1 and the slope of the f0 topline both increased with the delay. The spectral smearing diminished the fall-rise pattern within a sentence. Individual differences in the magnitude of these effects were significant.
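
    The spectral manipulation described here is essentially noise vocoding: split the signal into bands, extract each band's amplitude envelope, and use it to modulate matching noise bands. A minimal sketch, assuming ideal FFT-mask band-pass filters and a ~10 ms moving-average envelope smoother (the function names and band edges are illustrative, not the study's actual signal chain):

```python
import numpy as np

def _bandpass(x, fs, lo, hi):
    """Ideal (FFT-mask) band-pass filter."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(f < lo) | (f >= hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def noise_vocode(speech, fs, edges):
    """Noise-vocode a signal: per band, extract the amplitude envelope
    (rectify + smooth) and use it to modulate band-limited noise."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(speech))
    win = int(0.01 * fs)                      # ~10 ms envelope smoother
    kernel = np.ones(win) / win
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = _bandpass(speech, fs, lo, hi)
        env = np.convolve(np.abs(band), kernel, mode="same")
        noise = _bandpass(rng.standard_normal(len(speech)), fs, lo, hi)
        out += env * noise
    return out

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t) * (1 + np.sin(2 * np.pi * 3 * t))
v = noise_vocode(tone, fs, edges=[100, 800, 1600, 3200, 6400])  # four bands
```

    With a single band the output preserves only the broadband envelope; with four bands, coarse spectral shape survives while the fine structure (and hence f0/intonation information) is discarded, as in the study's feedback conditions.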

  15. Assessment and Preservation of Auditory Nerve Integrity in the Deafened Guinea Pig

    NARCIS (Netherlands)

    Ramekers, D.

    2014-01-01

    Profound hearing loss is often caused by cochlear hair cell loss. Cochlear implants (CIs) essentially replace hair cells by encoding sound and conveying the signal by means of pulsatile electrical stimulation to the spiral ganglion cells (SGCs) which form the auditory nerve. SGCs progressively degen

  16. Auditory Hallucinations in Schizophrenia and Nonschizophrenia Populations : A Review and Integrated Model of Cognitive Mechanisms

    NARCIS (Netherlands)

    Waters, Flavie; Allen, Paul; Aleman, Andre; Fernyhough, Charles; Woodward, Todd S.; Badcock, Johanna C.; Barkus, Emma; Johns, Louise; Varese, Filippo; Menon, Mahesh; Vercammen, Ans; Laroi, Frank

    2012-01-01

    While the majority of cognitive studies on auditory hallucinations (AHs) have been conducted in schizophrenia (SZ), an increasing number of researchers are turning their attention to different clinical and nonclinical populations, often using SZ findings as a model for research. Recent advances deri

  17. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth eHendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations have an influence on temporal-order judgment and thereby to determine the stage of processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  18. Gone in a Flash: Manipulation of Audiovisual Temporal Integration Using Transcranial Magnetic Stimulation

    Directory of Open Access Journals (Sweden)

    Roy eHamilton

    2013-09-01

    Full Text Available While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke, Vieth, Cottrell, and Mattingley (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events, and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy.

  19. Laevo: A Temporal Desktop Interface for Integrated Knowledge Work

    DEFF Research Database (Denmark)

    Jeuris, Steven; Houben, Steven; Bardram, Jakob

    2014-01-01

    Prior studies show that knowledge work is characterized by highly interlinked practices, including task, file and window management. However, existing personal information management tools primarily focus on a limited subset of knowledge work, forcing users to perform additional manual configuration work to integrate the different tools they use. In order to understand tool usage, we review literature on how users' activities are created and evolve over time as part of knowledge worker practices. From this we derive the activity life cycle, a conceptual framework describing the different… Because it doubles as a to-do list, incoming interruptions can be handled. Our field study indicates how highlighting the temporal nature of activities results in lightweight scalable activity management, while making users more aware about their ongoing and planned work.

  20. Conditioning the Cochlea to Facilitate Survival and Integration of Exogenous Cells into the Auditory Epithelium

    OpenAIRE

    Park, Yong-Ho; Wilson, Kevin F.; Ueda, Yoshihisa; Tung Wong, Hiu; Lisa A. Beyer; Donald L. Swiderski; Dolan, David F.; Raphael, Yehoash

    2014-01-01

    The mammalian auditory epithelium (AE) cannot replace supporting cells and hair cells once they are lost. Therefore, sensorineural hearing loss associated with missing cells is permanent. This inability to regenerate critical cell types makes the AE a potential target for cell replacement therapies such as stem cell transplantation. Inserting stem cells into the AE of deaf ears is a complicated task due to the hostile, high potassium environment of the scala media in the cochlea, and the robu...

  1. Long-term music training tunes how the brain temporally binds signals from multiple senses

    OpenAIRE

    Lee, Hweeling; Noppeney, Uta

    2011-01-01

    Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics–fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisua...

  2. Auditory Processing Disorders

    Science.gov (United States)

    Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...

  3. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM

  4. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  5. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten;

    2007-01-01

    The present study suggests a neural circuit for the transformation from the temporal to the rate-based code. Due to the neural connectivity of the circuit, bandpass-shaped rate modulation transfer functions are obtained that correspond to recorded functions of inferior colliculus (IC) neurons. In contrast to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies (BMFs) of the IC bandpass units. Instead, different BMFs are yielded from varying the number of input units projecting onto different bandpass units. In order to investigate the compatibility of the neural circuit with a linear modulation filterbank analysis as proposed in psychophysical studies, complex stimuli such as tones modulated by the sum of two sinusoids, narrowband noise, and iterated rippled noise were processed by the model.
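
    The general idea of converting a temporal (periodicity) code into a rate code can be illustrated with a simple delay-and-coincide readout, in the spirit of autocorrelation models of periodicity detection. This is a generic sketch, not the specific IC circuit proposed in the study (which instead varies the number of convergent input units):

```python
import numpy as np

def periodicity_to_rate(spikes, delays):
    """Map temporal periodicity to a rate code: one coincidence unit per
    delay d counts spikes that occur d bins after another spike, so the
    unit whose delay matches the stimulus period fires the most."""
    return np.array([np.sum(spikes[d:] * spikes[:-d]) for d in delays])

# Spike train locked to a 25-bin modulation period
spikes = np.zeros(2000)
spikes[::25] = 1.0
delays = np.arange(5, 60)
rates = periodicity_to_rate(spikes, delays)
print(int(delays[rates.argmax()]))  # -> 25
```

    Here the "best modulation frequency" of each readout unit is set by its delay; the study's circuit achieves different BMFs without such a continuously varied temporal parameter, which is exactly the contrast drawn in the abstract.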

  6. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  7. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Next to a review of the recent literature on the perception of temporal regularity in music, we discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion of the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  8. Neural interactions in unilateral colliculus and between bilateral colliculi modulate auditory signal processing

    Science.gov (United States)

    Mei, Hui-Xian; Cheng, Liang; Chen, Qi-Cai

    2013-01-01

    In the auditory pathway, the inferior colliculus (IC) is a major center for temporal and spectral integration of auditory information. There are widespread neural interactions in unilateral (one) IC and between bilateral (two) ICs that could modulate auditory signal processing such as the amplitude and frequency selectivity of IC neurons. These neural interactions are either inhibitory or excitatory, and are mostly mediated by γ-aminobutyric acid (GABA) and glutamate, respectively. However, the majority of interactions are inhibitory while excitatory interactions are in the minority. Such unbalanced properties between excitatory and inhibitory projections have an important role in the formation of unilateral auditory dominance and sound location, and the neural interaction in one IC and between two ICs provide an adjustable and plastic modulation pattern for auditory signal processing. PMID:23626523

  10. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

    To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.

  11. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
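
    The variance-integration mechanism proposed above can be illustrated with a toy simulation: with the mean drive held fixed, the integrator's ramp rate scales with the input variance (here, the noise level is a stand-in for spike-coincidence probability). All parameters are illustrative, not taken from the model in the paper:

```python
import random

def integrate_variance(inputs, mean):
    """Non-leaky integrator driven by squared fluctuations of the input
    around a fixed mean (the mean component itself is cancelled)."""
    x = 0.0
    for s in inputs:
        x += (s - mean) ** 2   # accumulate variance, not the mean
    return x

random.seed(0)
mean_rate = 0.5
# A higher sigma stands in for a higher spike-coincidence probability,
# which raises input variance while the mean drive stays constant.
low = [random.gauss(mean_rate, 0.1) for _ in range(20000)]
high = [random.gauss(mean_rate, 0.2) for _ in range(20000)]

ramp_low = integrate_variance(low, mean_rate)
ramp_high = integrate_variance(high, mean_rate)
ratio = ramp_high / ramp_low   # variance doubled in sigma -> ratio near 4
```

Doubling the fluctuation amplitude roughly quadruples the ramp, consistent with integration of variance rather than of the mean.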

  13. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Science.gov (United States)

    Eggermont, Jos J; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex. PMID:21625385
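
    The space-constant comparison above (5.5, 7.4 and 3.2 mm) rests on fitting an exponential decay to pair correlation versus electrode distance. A minimal sketch of that fit on synthetic data (the decay model and the values below are illustrative, not taken from the recordings):

```python
import math

def space_constant(distances_mm, correlations):
    """Least-squares slope of ln(r) vs distance; for an exponential decay
    r(d) = r0 * exp(-d / lam), the space constant is -1/slope."""
    xs, ys = distances_mm, [math.log(r) for r in correlations]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope

# Synthetic pair correlations decaying with a 3.2 mm space constant,
# the order of magnitude reported above for spike-pair correlations.
lam_true = 3.2
dists = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
corrs = [0.4 * math.exp(-d / lam_true) for d in dists]
lam_est = space_constant(dists, corrs)
```

On noise-free exponential data the log-linear fit recovers the space constant exactly.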

  14. A hierarchical nest survival model integrating incomplete temporally varying covariates

    Science.gov (United States)

    Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
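
    The daily-survival structure described above can be sketched as a logit-linear model, with interval survival the product of daily survival probabilities. The coefficients and covariate values below are hypothetical, chosen only to illustrate how a rising insect index would depress nest survival:

```python
import math

def daily_survival(beta0, beta1, covariate):
    """Logit-linear daily nest survival: logit(s_t) = beta0 + beta1 * x_t."""
    eta = beta0 + beta1 * covariate
    return 1.0 / (1.0 + math.exp(-eta))

def period_survival(beta0, beta1, covariates):
    """Probability a nest survives every day of the interval."""
    p = 1.0
    for x in covariates:
        p *= daily_survival(beta0, beta1, x)
    return p

# Hypothetical coefficients: beta1 < 0 means daily survival drops as the
# black-fly index rises (cf. the S. annulus hypothesis above).
b0, b1 = 3.0, -0.8
calm_week = period_survival(b0, b1, [0.2] * 7)   # low insect index
fly_week = period_survival(b0, b1, [2.5] * 7)    # high insect index
```

In the hierarchical version, the missing daily covariates would themselves be modeled rather than plugged in directly as here.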

  15. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    Directory of Open Access Journals (Sweden)

    Jordi Navarra

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  16. Temporal aggregation in a periodically integrated autoregressive process

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); H.P. Boswijk (Peter)

    1996-01-01

    A periodically integrated autoregressive process for a time series which is observed S times per year assumes the presence of S - 1 cointegration relations between the annual series containing the seasonal observations, with the additional feature that these relations are different across
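
    A minimal simulation of a periodically integrated AR(1), under the textbook condition that the product of the season-specific coefficients equals one (the coefficients below are illustrative, not from the paper):

```python
# Periodic AR(1): y_t = alpha_{s(t)} * y_{t-1} + eps_t, with season-specific
# coefficients. Periodic integration requires prod(alpha_s) = 1, so a full
# year maps y -> y (an annual unit root) even though no single alpha is 1.
S = 4
alphas = [2.0, 0.5, 1.25, 0.8]   # product = 1 -> periodically integrated
prod = 1.0
for a in alphas:
    prod *= a

def simulate_piar(alphas, shocks, y0=1.0):
    """Iterate the periodic AR(1) recursion over a sequence of shocks."""
    y, ys = y0, []
    for t, e in enumerate(shocks):
        y = alphas[t % len(alphas)] * y + e
        ys.append(y)
    return ys

# With zero shocks, the level returns to y0 at the end of every year.
path = simulate_piar(alphas, [0.0] * 8, y0=1.0)
```

Temporal aggregation to annual totals would then remove the within-year dynamics while preserving the unit annual root.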

  17. Event based self-supervised temporal integration for multimodal sensor data.

    Science.gov (United States)

    Barakova, Emilia I; Lourens, Tino

    2005-06-01

    A method for synergistic integration of multimodal sensor data is proposed in this paper. This method is based on two aspects of the integration process: (1) achieving synergistic integration of two or more sensory modalities, and (2) fusing the various information streams at particular moments during processing. Inspired by psychophysical experiments, we propose a self-supervised learning method for achieving synergy with combined representations. Evidence from temporal registration and binding experiments indicates that different cues are processed individually at specific time intervals. Therefore, an event-based temporal co-occurrence principle is proposed for the integration process. This integration method was applied to a mobile robot exploring unfamiliar environments. Simulations showed that integration enhanced route recognition with many perceptual similarities; moreover, they indicate that a perceptual hierarchy of knowledge about instant movement contributes significantly to short-term navigation, but that visual perceptions have bigger impact over longer intervals. PMID:15988800
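
    The event-based temporal co-occurrence principle can be sketched as pairing events from two modalities whose timestamps fall within a window. The function and timestamps below are an illustration, not the authors' implementation:

```python
def bind_events(stream_a, stream_b, window):
    """Pair events from two modalities whose timestamps fall within a
    co-occurrence window; each event in stream_b is used at most once."""
    pairs = []
    used = set()
    for ta in stream_a:
        for i, tb in enumerate(stream_b):
            if i not in used and abs(ta - tb) <= window:
                pairs.append((ta, tb))
                used.add(i)
                break
    return pairs

# Hypothetical timestamps (seconds): visual landmark detections vs
# instant-movement (odometry) events from a mobile robot.
visual = [0.10, 1.05, 2.50, 4.00]
motor = [0.12, 1.40, 2.48]
bound = bind_events(visual, motor, window=0.10)
```

Only the two co-occurring pairs are bound; the remaining events stay unimodal.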

  19. The temporal characteristics of Ca2+ entry through L-type and T-type Ca2+ channels shape exocytosis efficiency in chick auditory hair cells during development.

    Science.gov (United States)

    Levic, Snezana; Dulon, Didier

    2012-12-01

    During development, synaptic exocytosis by cochlear hair cells is first initiated by patterned spontaneous Ca(2+) spikes and, at the onset of hearing, by sound-driven graded depolarizing potentials. The molecular reorganization occurring in the hair cell synaptic machinery during this developmental transition still remains elusive. We characterized the changes in biophysical properties of voltage-gated Ca(2+) currents and exocytosis in developing auditory hair cells of a precocial animal, the domestic chick. We found that immature chick hair cells (embryonic days 10-12) use two types of Ca(2+) currents to control exocytosis: low-voltage-activating, rapidly inactivating (mibefradil sensitive) T-type Ca(2+) currents and high-voltage-activating, noninactivating (nifedipine sensitive) L-type currents. Exocytosis evoked by T-type Ca(2+) current displayed a fast release component (RRP) but lacked the slow sustained release component (SRP), suggesting an inefficient recruitment of distant synaptic vesicles by this transient Ca(2+) current. With maturation, the participation of L-type Ca(2+) currents to exocytosis largely increased, inducing a highly Ca(2+) efficient recruitment of an RRP and an SRP component. Notably, L-type-driven exocytosis in immature hair cells displayed higher Ca(2+) efficiency when triggered by prerecorded native action potentials than by voltage steps, whereas similar efficiency for both protocols was found in mature hair cells. This difference likely reflects a tighter coupling between release sites and Ca(2+) channels in mature hair cells. Overall, our results suggest that the temporal characteristics of Ca(2+) entry through T-type and L-type Ca(2+) channels greatly influence synaptic release by hair cells during cochlear development.

  20. Knowledge-data integration for temporal reasoning in a clinical trial system.

    Science.gov (United States)

    O'Connor, Martin J; Shankar, Ravi D; Parrish, David B; Das, Amar K

    2009-04-01

    Managing time-stamped data is essential to clinical research activities and often requires the use of considerable domain knowledge. Adequately representing and integrating temporal data and domain knowledge is difficult with the database technologies used in most clinical research systems. There is often a disconnect between the database representation of research data and corresponding domain knowledge of clinical research concepts. In this paper, we present a set of methodologies for undertaking ontology-based specification of temporal information, and discuss their application to the verification of protocol-specific temporal constraints among clinical trial activities. Our approach allows knowledge-level temporal constraints to be evaluated against operational trial data stored in relational databases. We show how the Semantic Web ontology and rule languages OWL and SWRL, respectively, can support tools for research data management that automatically integrate low-level representations of relational data with high-level domain concepts used in study design. PMID:18789876
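
    A knowledge-level temporal constraint of the kind described (here, a hypothetical "follow-up visit 28 ± 3 days after dosing" rule) can be checked against row-level dates. This is a plain-Python illustration of the idea, not the OWL/SWRL machinery itself:

```python
from datetime import date

def check_offset_constraint(anchor, event, min_days, max_days):
    """True if `event` occurs between min_days and max_days after `anchor`
    (inclusive) - the kind of rule a SWRL temporal predicate would encode."""
    delta = (event - anchor).days
    return min_days <= delta <= max_days

# Hypothetical trial rows: follow-up must fall 28 +/- 3 days after dosing.
dosing = date(2009, 1, 5)
ok_visit = date(2009, 2, 3)     # day 29: within the protocol window
bad_visit = date(2009, 2, 10)   # day 36: violates the constraint
```

The ontology-based approach in the paper evaluates such constraints automatically against relational trial data rather than hand-coding them per protocol.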

  1. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study

    Science.gov (United States)

    Kumar, G. Vinodh; Halder, Tamesh; Jaiswal, Amit K.; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, whether and how the network across the whole brain participates in multisensory perception remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300–600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus

  2. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  3. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  4. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.
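
    The short-term-memory integration of dissonant events can be sketched as a leaky integrator of instantaneous dissonance; the decay constant and event values below are illustrative, not the published model:

```python
import math

def memory_dissonance(events, decay_s=2.0, dt=0.1):
    """Leaky integration of instantaneous dissonance: each new event is
    added to an exponentially decaying short-term-memory trace."""
    trace, out = 0.0, []
    k = math.exp(-dt / decay_s)   # per-step memory decay factor
    for d in events:
        trace = trace * k + d
        out.append(trace)
    return out

# A repeated dissonant event accumulates in memory, so the trace ends up
# higher than any single event's instantaneous dissonance, then decays.
pulse = [1.0] * 20 + [0.0] * 20
trace = memory_dissonance(pulse)
```

This accumulation-and-decay behavior is what distinguishes a memory-based measure from a frame-by-frame dissonance estimate.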

  5. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor), and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Mean percentage error was 11.5% (SD ± 7.0%) for visual cues and 12.9% (SD ± 11.8%) for auditory cues. Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
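
    The accuracy measure reported above (mean percentage error of reproduced steps against a target) can be computed as follows; the step lengths are made-up values, not the study's data:

```python
def mean_percentage_error(produced, target):
    """Mean absolute error of produced step lengths (or durations),
    expressed as a percentage of the target value."""
    errs = [abs(p - target) / target * 100.0 for p in produced]
    return sum(errs) / len(errs)

# Hypothetical step lengths (m) reproduced from a 0.70 m visual target.
steps = [0.66, 0.75, 0.63, 0.77]
mpe = mean_percentage_error(steps, 0.70)
```

The same function applies to the auditory condition with durations in place of lengths.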

  6. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  7. Temporal integration of multisensory stimuli in autism spectrum disorder: a predictive coding perspective.

    Science.gov (United States)

    Chan, Jason S; Langer, Anne; Kaiser, Jochen

    2016-08-01

    Recently, a growing number of studies have examined the role of multisensory temporal integration in people with autism spectrum disorder (ASD). Some studies have used temporal order judgments or simultaneity judgments to examine the temporal binding window, while others have employed multisensory illusions, such as the sound-induced flash illusion (SiFi). The SiFi is an illusion created by presenting two beeps along with one flash. Participants perceive two flashes if the stimulus-onset asynchrony (SOA) between the two flashes is brief. The temporal binding window can be measured by modulating the SOA between the beeps. Each of these tasks has been used to compare the temporal binding window in people with ASD and typically developing individuals; however, the results have been mixed. While temporal order and simultaneity judgment tasks have shown little temporal binding window differences between groups, studies using the SiFi have found a wider temporal binding window in ASD compared to controls. In this paper, we discuss these seemingly contradictory findings and suggest that predictive coding may be able to explain the differences between these tasks. PMID:27324803
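
    One simple way to quantify the binding-window difference described above is the range of SOAs over which the two-flash illusion is reported above some threshold; the response rates below are hypothetical:

```python
def binding_window_width(soas_ms, illusion_rate, threshold=0.5):
    """Width of the SOA range over which the sound-induced flash illusion
    is reported at or above threshold - a crude binding-window estimate."""
    inside = [s for s, r in zip(soas_ms, illusion_rate) if r >= threshold]
    return (max(inside) - min(inside)) if inside else 0.0

soas = [25, 50, 100, 150, 200, 300]
typical = [0.9, 0.8, 0.6, 0.3, 0.2, 0.1]   # hypothetical control group
asd = [0.9, 0.85, 0.7, 0.6, 0.5, 0.2]      # hypothetical wider window

w_typical = binding_window_width(soas, typical)
w_asd = binding_window_width(soas, asd)
```

A wider window for the second group corresponds to the illusion persisting at longer beep-beep SOAs, the pattern reported in ASD SiFi studies.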

  8. Multimodal integration of micro-Doppler sonar and auditory signals for behavior classification with convolutional networks.

    Science.gov (United States)

    Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas

    2013-10-01

    The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
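
    The "probabilistic combination of system output through time" can be sketched as summing per-frame log-probabilities within (and, for fusion, across) modalities; the class probabilities below are invented for illustration, not network outputs:

```python
import math

def combine_log_probs(framewise_probs):
    """Combine per-frame class probabilities over time by summing their
    logs (a product of probabilities), then pick the best-scoring class."""
    n_classes = len(framewise_probs[0])
    scores = [0.0] * n_classes
    for frame in framewise_probs:
        for c, p in enumerate(frame):
            scores[c] += math.log(max(p, 1e-12))  # guard against log(0)
    return scores.index(max(scores))

# Hypothetical 3-class softmax outputs from two modalities, 3 frames each.
audio = [[0.5, 0.3, 0.2], [0.6, 0.2, 0.2], [0.4, 0.4, 0.2]]
sonar = [[0.4, 0.5, 0.1], [0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]

best_audio = combine_log_probs(audio)
best_fused = combine_log_probs(audio + sonar)   # naive multimodal fusion
```

Concatenating the frame streams treats the modalities as conditionally independent evidence, which is one simple reading of the probabilistic fusion step.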

  9. Parameters affecting temporal resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    Science.gov (United States)

    Mor, I.; Vartsky, D.; Dangendorf, V.; Bar, D.; Feldman, G.; Goldberg, M. B.; Tittelmeier, K.; Bromberger, B.; Brandis, M.; Weierganz, M.

    2013-11-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy variations of the total scattering cross-section in the En = 1-10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time of flight (ECTOF), it integrates the detector signal during a well-defined neutron time-of-flight window corresponding to a pre-selected energy bin, e.g., the energy interval spanning a cross-section resonance of an element such as C, O or N. The integrative characteristic of the detector permits loss-free operation at very intense, pulsed neutron fluxes, at the cost, however, of degraded temporal resolution. This work presents a theoretical and experimental evaluation of detector-related parameters which affect the temporal resolution of the TRION system.
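
    Selecting a time-of-flight gate for a pre-chosen energy bin follows from kinematics; a classical sketch (adequate to a few percent at these energies) with an assumed flight path and energy bin, not TRION's actual geometry:

```python
import math

M_N_MEV = 939.565          # neutron rest energy (MeV)
C_M_PER_S = 299_792_458.0  # speed of light (m/s)

def tof_ns(energy_mev, flight_path_m):
    """Classical neutron time of flight: v = c * sqrt(2E / m_n c^2).
    Relativistic corrections are small for 1-10 MeV neutrons."""
    v = C_M_PER_S * math.sqrt(2.0 * energy_mev / M_N_MEV)
    return flight_path_m / v * 1e9

# TOF gate for a hypothetical 2.0-2.5 MeV energy bin over a 12 m path:
t_late = tof_ns(2.0, 12.0)    # lower energy -> slower -> arrives later
t_early = tof_ns(2.5, 12.0)
gate_ns = (t_early, t_late)   # integrate the detector signal in this window
```

The detector integrates light during such a gate, so any blur in the recorded arrival time directly widens the effective energy bin.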

  10. Parameters Affecting Temporal Resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    CERN Document Server

    Mor, I; Dangendorf, V; Bar, D; Feldman, G; Goldberg, M B; Tittelmeier, K; Bromberger, B; Brandis, M; Weierganz, M

    2013-01-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy-variations of the total scattering cross-section in the En = 1-10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time of flight (ECTOF), it integrates the detector signal during a well-defined neutron time-of-flight window corresponding to a pre-selected energy bin, e.g., the energy interval spanning a cross-section resonance of an element such as C, O or N. The integrative characteristic of the detector permits loss-free operation at very intense, pulsed neutron fluxes, at the cost, however, of degraded recorded temporal resolution. This work presents a theoretical and experimental evaluation of the detector-related parameters that affect the temporal resolution of the TRION system.

  11. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with the opposite audiovisual order. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even though it was trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs is independent, possibly tapping into different circuits for audiovisual integration due to the engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for the study of multisensory interactions in healthy participants and in clinical populations with dysfunctional multisensory integration. PMID:27003546

  13. Auditory Neuropathy

    Science.gov (United States)

    ... field differ in their opinions about the potential benefits of hearing aids, cochlear implants, and other technologies for people with auditory neuropathy. Some professionals report that hearing aids and personal listening devices such as frequency modulation (FM) systems are ...

  14. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  15. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields of macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane, chronically in three animals, using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to the evoked potentials elicited by conspecific vocalizations. We found a gradual decrease in overall classification performance along the caudal-to-rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors differed significantly for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain the differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in the neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  16. Non-monotonic Temporal-Weighting Indicates a Dynamically Modulated Evidence-Integration Mechanism.

    Directory of Open Access Journals (Sweden)

    Zohar Z Bronfman

    2016-02-01

    Full Text Available Perceptual decisions are thought to be mediated by a mechanism of sequential sampling and integration of noisy evidence whose temporal weighting profile affects the decision quality. To examine temporal weighting, participants were presented with two brightness-fluctuating disks for 1, 2 or 3 seconds and were requested to choose the overall brighter disk at the end of each trial. By employing a signal-perturbation method, which deploys across trials a set of systematically controlled temporal dispersions of the same overall signal, we were able to quantify the participants' temporal weighting profile. Results indicate that, for intervals of 1 or 2 sec, participants exhibit a primacy-bias. However, for longer stimuli (3-sec) the temporal weighting profile is non-monotonic, with concurrent primacy and recency, which is inconsistent with the predictions of previously suggested computational models of perceptual decision-making (drift-diffusion and Ornstein-Uhlenbeck processes). We propose a novel, dynamic variant of the leaky-competing accumulator model as a potential account for this finding, and we discuss potential neural mechanisms.

  17. Non-monotonic Temporal-Weighting Indicates a Dynamically Modulated Evidence-Integration Mechanism.

    Science.gov (United States)

    Bronfman, Zohar Z; Brezis, Noam; Usher, Marius

    2016-02-01

    Perceptual decisions are thought to be mediated by a mechanism of sequential sampling and integration of noisy evidence whose temporal weighting profile affects the decision quality. To examine temporal weighting, participants were presented with two brightness-fluctuating disks for 1, 2 or 3 seconds and were requested to choose the overall brighter disk at the end of each trial. By employing a signal-perturbation method, which deploys across trials a set of systematically controlled temporal dispersions of the same overall signal, we were able to quantify the participants' temporal weighting profile. Results indicate that, for intervals of 1 or 2 sec, participants exhibit a primacy-bias. However, for longer stimuli (3-sec) the temporal weighting profile is non-monotonic, with concurrent primacy and recency, which is inconsistent with the predictions of previously suggested computational models of perceptual decision-making (drift-diffusion and Ornstein-Uhlenbeck processes). We propose a novel, dynamic variant of the leaky-competing accumulator model as a potential account for this finding, and we discuss potential neural mechanisms. PMID:26866598
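The leaky-competing accumulator that the authors extend can be sketched in its standard form (Usher & McClelland); the parameter names and default values below are illustrative assumptions, not the paper's fitted dynamic variant.

```python
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.2, noise=0.1,
              dt=0.01, T=3.0, seed=None):
    """Simulate one trial of a standard leaky competing accumulator:
    dx_i = (rho_i - leak * x_i - inhibition * sum_{j != i} x_j) dt + noise dW,
    with activations clipped at zero. Returns the final activations;
    the chosen alternative is the one with the larger final activation."""
    rng = np.random.default_rng(seed)
    rho = np.asarray(inputs, dtype=float)
    x = np.zeros_like(rho)
    for _ in range(int(T / dt)):
        others = x.sum() - x                      # summed competing activation
        dx = (rho - leak * x - inhibition * others) * dt
        dx += noise * np.sqrt(dt) * rng.standard_normal(len(rho))
        x = np.maximum(x + dx, 0.0)               # non-negativity constraint
    return x
```

Varying `leak` and `inhibition` over the course of a trial is one way to produce the concurrent primacy and recency pattern the abstract describes.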

  18. Temporal integration of loudness in listeners with hearing losses of primarily cochlear origin

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1999-01-01

    To investigate how hearing loss of primarily cochlear origin affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level for 15 listeners with cochlear impairments and for seven age-matched controls. Three frequencies, usually 0.5, 1, and 4......-frequency hearing losses (slopes >50 dB/octave) showed larger-than-normal maximal amounts of temporal integration (40 to 50 dB). This finding is consistent with the shallow loudness functions predicted by our excitation-pattern model for impaired listeners [, in Modeling Sensorineural Hearing Loss, edited by W....... Jesteadt (Erlbaum, Mahwah, NJ, 1997), pp. 187–198]. Loudness functions derived from impaired listeners' temporal-integration functions indicate that restoration of loudness in listeners with cochlear hearing loss usually will require the same gain whether the sound is short or long. ©1999 Acoustical...

  19. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music

    OpenAIRE

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. ...

  20. Recording of electrically evoked auditory brainstem responses (E-ABR) with an integrated stimulus generator in Matlab.

    Science.gov (United States)

    Bahmer, Andreas; Peter, Otto; Baumann, Uwe

    2008-08-30

    Electrical auditory brainstem responses (E-ABRs) of subjects with cochlear implants are used for monitoring the physiologic responses of early signal processing in the auditory system. Additionally, E-ABR measurements allow the diagnosis of retro-cochlear diseases. Therefore, E-ABR should be available in every cochlear implant center as a diagnostic tool. In this paper, we introduce a low-cost setup designed to perform an E-ABR as well as a conventional ABR for research purposes. The distributable form was developed with Matlab and the Matlab Compiler (The MathWorks Inc.). For the ABR, only a PC with a soundcard, conventional headphones, and an EEG pre-amplifier are necessary; for E-ABR, an interface to the cochlear implant is required in addition. For our purposes, we implemented an interface for the Combi 40+/Pulsar implant (MED-EL, Innsbruck).

  1. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  3. The temporal binding window for audiovisual speech: Children are like little adults.

    Science.gov (United States)

    Hillock-Dunn, Andrea; Grantham, D Wesley; Wallace, Mark T

    2016-07-29

    During a typical communication exchange, both auditory and visual cues contribute to speech comprehension. The influence of vision on speech perception can be measured behaviorally using a task where incongruent auditory and visual speech stimuli are paired to induce perception of a novel token reflective of multisensory integration (i.e., the McGurk effect). This effect is temporally constrained in adults, with illusion perception decreasing as the temporal offset between the auditory and visual stimuli increases. Here, we used the McGurk effect to investigate the development of the temporal characteristics of audiovisual speech binding in 7-24 year-olds. Surprisingly, results indicated that although older participants perceived the McGurk illusion more frequently, no age-dependent change in the temporal boundaries of audiovisual speech binding was observed. PMID:26920938

  4. Integrity of medial temporal structures may predict better improvement of spatial neglect with prism adaptation treatment.

    Science.gov (United States)

    Chen, Peii; Goedert, Kelly M; Shah, Priyanka; Foundas, Anne L; Barrett, A M

    2014-09-01

    Prism adaptation treatment (PAT) is a promising rehabilitative method for functional recovery in persons with spatial neglect. Previous research suggests that PAT improves motor-intentional "aiming" deficits that frequently occur with frontal lesions. To test whether presence of frontal lesions predicted better improvement of spatial neglect after PAT, the current study evaluated neglect-specific improvement in functional activities (assessment with the Catherine Bergego Scale) over time in 21 right-brain-damaged stroke survivors with left-sided spatial neglect. The results demonstrated that neglect patients' functional activities improved after two weeks of PAT and continued improving for four weeks. Such functional improvement did not occur equally in all of the participants: Neglect patients with lesions involving the frontal cortex (n = 13) experienced significantly better functional improvement than did those without frontal lesions (n = 8). More importantly, voxel-based lesion-behavior mapping (VLBM) revealed that in comparison to the group of patients without frontal lesions, the frontal-lesioned neglect patients had intact regions in the medial temporal areas, the superior temporal areas, and the inferior longitudinal fasciculus. The medial cortical and subcortical areas in the temporal lobe were especially distinguished in the "frontal lesion" group. The findings suggest that the integrity of medial temporal structures may play an important role in supporting functional improvement after PAT.

  5. [Towards an integrated approach to infantile autism: the superior temporal lobe between neurosciences and psychoanalysis].

    Science.gov (United States)

    Golse, Bernard; Robel, Laurence

    2009-02-01

    The superior temporal lobe is currently at the focus of intensive research in infantile autism, a psychopathologic disorder apparently representing the severest failure of access to intersubjectivity, i.e. the ability to accept that others exist independently of oneself. Access to intersubjectivity seems to involve the superior temporal lobe, which is the seat of several relevant functions such as face and voice recognition and perception of others' movements, and coordinates the different sensory inputs that identify an object as being "external". The psychoanalytic approach to infantile autism and recent cognitive data are now converging, and intersubjectivity is considered to result from "mantling" or comodalization of sensory inputs from external objects. Recent brain neuroimaging studies point to anatomic and functional abnormalities of the superior temporal lobe in autistic children. Dialogue is therefore possible between these different disciplines, opening the way to an integrated view of infantile autism in which the superior temporal lobe holds a central place--not necessarily as a primary cause of autism but rather as an intermediary or a reflection of autistic functioning

  6. Gradient sensitivity to acoustic detail and temporal integration of phonetic cues

    Science.gov (United States)

    McMurray, Bob; Clayards, Meghan A.; Aslin, Richard N.; Tanenhaus, Michael K.

    2001-05-01

    Speech contains systematic covariation at the subphonemic level that could be used to integrate information over time (McMurray et al., 2003; Gow, 2001). Previous research has established sensitivity to this variation: activation for lexical competitors is sensitive to within-category variation in voice-onset-time (McMurray et al., 2002). This study extends this investigation to other subphonemic speech cues by examining formant transitions (r/l and d/g), formant slope (b/w) and VOT (b/p) in an eye-tracking paradigm similar to McMurray et al. (2002). Vowel length was also varied to examine temporal organization (e.g., VOT precedes the vowel). Subjects heard a token from each continuum and selected the target from a screen containing pictures of the target, competitor and unrelated items. Fixations to the competitor increased with distance from the boundary along each of the speech continua. Unlike prior work, there was also an effect on fixations to the target. There was no effect of vowel length on the d/g or r/l continua, but rate-dependent continua (b/w and b/p) showed length effects. Importantly, the temporal order of cues was reflected in the pattern of looks to competitors, providing an important window into the processes by which acoustic detail is temporally integrated.

  7. Exploring the role of the posterior middle temporal gyrus in semantic cognition: Integration of anterior temporal lobe with executive processes.

    Science.gov (United States)

    Davey, James; Thompson, Hannah E; Hallam, Glyn; Karapanagiotidis, Theodoros; Murphy, Charlotte; De Caso, Irene; Krieger-Redwood, Katya; Bernhardt, Boris C; Smallwood, Jonathan; Jefferies, Elizabeth

    2016-08-15

    Making sense of the world around us depends upon selectively retrieving information relevant to our current goal or context. However, it is unclear whether selective semantic retrieval relies exclusively on general control mechanisms recruited in demanding non-semantic tasks, or instead on systems specialised for the control of meaning. One hypothesis is that the left posterior middle temporal gyrus (pMTG) is important in the controlled retrieval of semantic (not non-semantic) information; however this view remains controversial since a parallel literature links this site to event and relational semantics. In a functional neuroimaging study, we demonstrated that an area of pMTG implicated in semantic control by a recent meta-analysis was activated in a conjunction of (i) semantic association over size judgements and (ii) action over colour feature matching. Under these circumstances the same region showed functional coupling with the inferior frontal gyrus - another crucial site for semantic control. Structural and functional connectivity analyses demonstrated that this site is at the nexus of networks recruited in automatic semantic processing (the default mode network) and executively demanding tasks (the multiple-demand network). Moreover, in both task and task-free contexts, pMTG exhibited functional properties that were more similar to ventral parts of inferior frontal cortex, implicated in controlled semantic retrieval, than more dorsal inferior frontal sulcus, implicated in domain-general control. Finally, the pMTG region was functionally correlated at rest with other regions implicated in control-demanding semantic tasks, including inferior frontal gyrus and intraparietal sulcus. We suggest that pMTG may play a crucial role within a large-scale network that allows the integration of automatic retrieval in the default mode network with executively-demanding goal-oriented cognition, and that this could support our ability to understand actions and non

  8. Integration of smartphones and webcam for the measure of spatio-temporal gait parameters.

    Science.gov (United States)

    Barone, V; Maranesi, E; Fioretti, S

    2014-01-01

    A very low-cost prototype has been developed for the spatial and temporal analysis of human movement using an integrated system of last-generation smartphones and a high-definition webcam, controlled by a laptop. The system can be used to analyze mainly planar motions in non-structured environments. In this paper, the accelerometer signal captured by the 3D sensor embedded in one smartphone, and the positions of colored markers derived from the webcam frames, are used to compute spatial-temporal parameters of gait. The accuracy of the results is compared with that obtainable with gold-standard instrumentation. The system is characterized by a very low cost and a very high level of automation. It is intended for use by non-expert users in ambulatory settings. PMID:25571351
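A temporal gait parameter such as stride time can be estimated from an accelerometer trace by detecting repeated events (e.g. heel strikes) as threshold crossings. The sketch below is a crude illustration under that assumption, not the authors' algorithm; the threshold value and function name are hypothetical.

```python
import numpy as np

def stride_times(acc, fs, height=1.5):
    """Estimate stride times (s) from an acceleration signal.

    acc: 1-D acceleration trace (arbitrary units), fs: sampling rate (Hz).
    Each rising crossing of `height` is treated as one heel strike; the
    differences between consecutive strikes give the stride times.
    """
    above = acc > height
    # rising edges: sample i is above threshold while sample i-1 is not
    strikes = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return np.diff(strikes) / fs
```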

  9. Integrating Temporal and Spectral Features of Astronomical Data Using Wavelet Analysis for Source Classification

    CERN Document Server

    Ukwatta, T N

    2016-01-01

    Temporal and spectral information extracted from a stream of photons received from astronomical sources is the foundation on which we build understanding of various objects and processes in the Universe. Typically astronomers fit a number of models separately to light curves and spectra to extract relevant features. These features are then used to classify, identify, and understand the nature of the sources. However, these feature extraction methods may not be optimally sensitive to unknown properties of light curves and spectra. One can use the raw light curves and spectra as features to train classifiers, but this typically increases the dimensionality of the problem, often by several orders of magnitude. We overcome this problem by integrating light curves and spectra to create an abstract image and using wavelet analysis to extract important features from the image. Such features incorporate both temporal and spectral properties of the astronomical data. Classification is then performed on those abstract ...
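The idea of extracting joint temporal-spectral features from an "abstract image" via wavelet analysis can be illustrated with a single level of a 2-D Haar decomposition; this is a generic sketch (hand-rolled Haar averages/differences), not the paper's pipeline.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet decomposition.

    img: 2-D array with even dimensions, e.g. rows indexing time
    (light curve) and columns indexing energy (spectrum).
    Returns (ll, lh, hl, hh): approximation plus three detail subbands;
    subband statistics can serve as joint temporal-spectral features.
    """
    a = (img[0::2] + img[1::2]) / 2.0        # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0        # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0     # smooth in both directions
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0     # column (spectral) detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0     # row (temporal) detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal detail
    return ll, lh, hl, hh
```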

  10. Music perception and cognition following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Tramo, M J; Bharucha, J J; Musiek, F E

    1990-01-01

    We present experimental and anatomical data from a case study of impaired auditory perception following bilateral hemispheric strokes. To consider the cortical representation of sensory, perceptual, and cognitive functions mediating tonal information processing in music, pure tone sensation thresholds, spectral intonation judgments, and the associative priming of spectral intonation judgments by harmonic context were examined, and lesion localization was analyzed quantitatively using straight-line two-dimensional maps of the cortical surface reconstructed from magnetic resonance images. Despite normal pure tone sensation thresholds at 250-8000 Hz, the perception of tonal spectra was severely impaired, such that harmonic structures (major triads) were almost uniformly judged to sound dissonant; yet, the associative priming of spectral intonation judgments by harmonic context was preserved, indicating that cognitive representations of tonal hierarchies in music remained intact and accessible. Brainprints demonstrated complete bilateral lesions of the transverse gyri of Heschl and partial lesions of the right and left superior temporal gyri involving 98 and 20% of their surface areas, respectively. In the right hemisphere, there was partial sparing of the planum temporale, temporoparietal junction, and inferior parietal cortex. In the left hemisphere, all of the superior temporal region anterior to the transverse gyrus and parts of the planum temporale, temporoparietal junction, inferior parietal cortex, and insula were spared. These observations suggest that (1) sensory, perceptual, and cognitive functions mediating tonal information processing in music are neurologically dissociable; (2) complete bilateral lesions of primary auditory cortex combined with partial bilateral lesions of auditory association cortex chronically impair tonal consonance perception; (3) cognitive functions that hierarchically structure pitch information and generate harmonic expectancies

  11. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  12. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  13. Daytime Sleepiness Is Associated With Reduced Integration of Temporally Distant Outcomes on the Iowa Gambling Task.

    Science.gov (United States)

    Olson, Elizabeth A; Weber, Mareen; Rauch, Scott L; Killgore, William D S

    2016-01-01

    Sleep deprivation is associated with performance decrements on some measures of executive functioning. For instance, sleep deprivation results in altered decision making on the Iowa Gambling Task (IGT). However, it is unclear which component processes of the task may be driving the effect. In this study, IGT performance was decomposed using the Expectancy-Valence model. Recent sleep debt and greater daytime sleepiness were associated with higher scores on the updating parameter, which reflects the extent to which recent experiences are emphasized over remote ones. Findings suggest that the effects of insufficient sleep on IGT performance are due to a shortening of the time horizon over which decisions are integrated. These findings may have clinical implications in that individuals with sleep problems may not integrate more temporally distant information when making decisions.

  14. Temporal structure and complexity affect audio-visual correspondence detection

    Directory of Open Access Journals (Sweden)

    Rachel N Denison

    2013-01-01

    Full Text Available Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration.
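The matching task above (detecting shared temporal structure across lagged auditory and visual event streams) can be illustrated with a toy lagged-correspondence detector. This is an illustrative sketch, not the authors' analysis code, and all names are hypothetical:

```python
def stream_match(a, v, max_lag):
    """Score the match between two binary event streams over integer lags.

    a, v   : lists of 0/1 event markers per time bin
    max_lag: largest lag (in bins) to test in either direction
    Returns (best_lag, best_score), where score is a simple dot product
    counting coincident events at that lag.
    """
    best = (0, -1.0)
    n = len(a)
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * v[i + lag] for i in range(n) if 0 <= i + lag < n)
        if score > best[1]:
            best = (lag, score)
    return best
```

For matching streams offset by one bin, the detector recovers the crossmodal lag as the peak of the lag-score profile.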

  15. Convergent validity of the Integrated Visual and Auditory Continuous Performance Test (IVA+Plus): associations with working memory, processing speed, and behavioral ratings.

    Science.gov (United States)

    Arble, Eamonn; Kuentzel, Jeffrey; Barnett, Douglas

    2014-05-01

    Though the Integrated Visual and Auditory Continuous Performance Test (IVA + Plus) is commonly used by researchers and clinicians, few investigations have assessed its convergent and discriminant validity, especially with regard to its use with children. The present study details correlates of the IVA + Plus using measures of cognitive ability and ratings of child behavior (parent and teacher), drawing upon a sample of 90 psychoeducational evaluations. Scores from the IVA + Plus correlated significantly with the Working Memory and Processing Speed Indexes from the Fourth Edition of the Wechsler Intelligence Scales for Children (WISC-IV), though fewer and weaker significant correlations were seen with behavior ratings scales, and significant associations also occurred with WISC-IV Verbal Comprehension and Perceptual Reasoning. The overall pattern of relations is supportive of the validity of the IVA + Plus; however, general cognitive ability was associated with better performance on most of the primary scores of the IVA + Plus, suggesting that interpretation should take intelligence into account.

  16. Estimation of Spatially and Temporally Varied Groundwater Recharge from Precipitation Using a Systematic and Integrated Approach

    Science.gov (United States)

    Wang, M.

    2006-05-01

    Quantitative determination of spatially and temporally varied groundwater recharge from precipitation is a complex issue involving many control factors, and investigators face great challenges in quantifying the relationship between groundwater recharge and its control factors. In fact, its quantification is a complex process in which unstructured decisions are generally involved. The Analytic Hierarchy Process (AHP) is a systematic, flexible decision-making method for determining priorities when both qualitative and quantitative aspects of a decision must be accounted for. Moreover, by reducing complex decisions to a series of one-on-one comparisons and then synthesizing the results, the rationale behind the resulting decision can be clearly understood. In this study, a systematic and integrated approach for estimation of spatially and temporally varied groundwater recharge from precipitation is proposed, in which remote sensing (RS), GIS, AHP, and modeling techniques are coupled. A case study is presented to demonstrate its application. Based on field surveys and information analyses, the pertinent factors for groundwater recharge are assessed and the dominating factors are identified. An analytical model is then established for estimation of the spatially and temporally varied groundwater recharge from precipitation, in which the contribution potentials to groundwater recharge and the relative weights of the dominating factors are taken into account. The contribution potentials can be assessed by adopting fuzzy membership functions and integrating expert opinions. The weight for each of the dominating factors can be determined systematically through coupling of the RS, GIS, and AHP techniques. To reduce model uncertainty, this model should be further calibrated systematically and validated using even limited groundwater field data such as observed groundwater heads and groundwater discharges into
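The AHP step described above derives priority weights for the recharge-control factors from a pairwise-comparison matrix. A minimal sketch, using the common column-normalization approximation to the principal-eigenvector method (function name hypothetical):

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector.

    pairwise[i][j] holds the judged importance of factor i relative to
    factor j (reciprocal matrix, diagonal of 1s). Each column is
    normalized to sum to 1, then row entries are averaged.
    """
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]
```

For a perfectly consistent matrix this reproduces the exact eigenvector weights; for inconsistent judgments it is the standard quick approximation.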

  17. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations following cortical stroke. All of the hallucinations occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  18. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model adaptation. Single-frame pose recovery consists of a hypothesis generat

  20. In search of an auditory engram

    Science.gov (United States)

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, they raise the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory. PMID:15967995

  1. Nonretinotopic perception of orientation: Temporal integration of basic features operates in object-based coordinates.

    Science.gov (United States)

    Wutz, Andreas; Drewes, Jan; Melcher, David

    2016-08-01

    Early, feed-forward visual processing is organized in a retinotopic reference frame. In contrast, visual feature integration on longer time scales can involve object-based or spatiotopic coordinates. For example, in the Ternus-Pikler (T-P) apparent motion display, object identity is mapped across the object motion path. Here, we report evidence from three experiments supporting nonretinotopic feature integration even for the most paradigmatic example of retinotopically-defined features: orientation. We presented observers with a repeated series of T-P displays in which the perceived rotation of Gabor gratings indicates processing in either retinotopic or object-based coordinates. In Experiment 1, the frequency of perceived retinotopic rotations decreased exponentially for longer interstimulus intervals (ISIs) between T-P display frames, with object-based percepts dominating after about 150-250 ms. In a second experiment, we show that motion and rotation judgments depend on the perception of a moving object during the T-P display ISIs rather than only on temporal factors. In Experiment 3, we cued the observers' attentional state either toward a retinotopic or object motion-based reference frame and then tracked both the observers' eye position and the time course of the perceptual bias while viewing identical T-P display sequences. Overall, we report novel evidence for spatiotemporal integration of even basic visual features such as orientation in nonretinotopic coordinates, in order to support perceptual constancy across self- and object motion.

  2. Temporal integration of interaural-time differences carried by trains of 500-Hz tone bursts

    Science.gov (United States)

    Akeroyd, Michael A.

    2001-05-01

    The just-noticeable difference for the interaural time difference (ITD) of a sound generally decreases as its duration is increased [e.g., T. Houtgast and R. Plomp, J. Acoust. Soc. Am. 44, 807-812 (1968); E. Hafter and R. H. Dye, J. Acoust. Soc. Am. 73, 644-651 (1983)]. The rate of this temporal integration is less than would be expected from a simple multiple-looks application of signal detection theory, as though one part of the sound is given greater weighting. The present experiment measured the contributions of the start, middle, or end of a sound to the detectability of its ITD. The stimulus was a 32-pip train of 500-Hz, 10-ms pips, diotic apart from some (2-16) target pips which carried the ITD to be detected and which were placed either at the start, middle, or end of the train. Integration rates were determined from psychometric functions measured using four normal-hearing listeners. The results showed that the rate was smallest for targets placed at the start or end and larger for targets in the middle. The data can be described using a weighted multiple-looks approach, based on an integration function of approximately 100-ms time constant together with an emphasis of the start, and end, of the sound.
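The multiple-looks account referenced above can be made concrete. For independent looks of equal reliability, signal detection theory predicts a combined sensitivity of sqrt(sum of d-prime squared); unequal weights give the optimal-linear-combination form below. This is an assumption-laden illustration of the model class, not the authors' fitting code, and the function name is hypothetical:

```python
import math

def multiple_looks_dprime(dprimes, weights=None):
    """Combine per-look sensitivities under the multiple-looks model.

    Equal, independent looks: d'_total = sqrt(sum of d_i**2).
    Weighted looks w_i (optimal linear combination of independent,
    equal-variance observations): d'_total = sum(w_i * d_i) / sqrt(sum w_i**2).
    """
    if weights is None:
        return math.sqrt(sum(d * d for d in dprimes))
    num = sum(w * d for w, d in zip(weights, dprimes))
    den = math.sqrt(sum(w * w for w in weights))
    return num / den
```

With weights proportional to the per-look d' values, the weighted form reduces to the unweighted prediction, so doubling the number of equally informative looks raises d' by sqrt(2), which is the integration rate the experiment compares against.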

  3. ePRISM: A case study in multiple proxy and mixed temporal resolution integration

    Science.gov (United States)

    Robinson, Marci M.; Dowsett, Harry J.

    2010-01-01

    As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (~3.3 to 3.0 Ma), yet still similar in many ways to modern conditions and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records of multiple paleotemperature proxies.
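A "warm peak average" can be sketched as the mean of the warmest estimates within a time window. The abstract does not specify what fraction of estimates counts as the warm peak, so the top_fraction parameter and function name below are purely illustrative:

```python
def warm_peak_average(sst_estimates, top_fraction=0.25):
    """Mean of the warmest fraction of SST estimates in a time window.

    top_fraction is a hypothetical cutoff; the actual PRISM procedure
    may define the warm peak differently.
    """
    k = max(1, round(len(sst_estimates) * top_fraction))
    return sum(sorted(sst_estimates, reverse=True)[:k]) / k
```

Averaging over the warm peak, rather than tying the reconstruction to a single isotope event, lets records of differing temporal resolution contribute to the same interval.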

  4. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  5. Famous face identification in temporal lobe epilepsy: Support for a multimodal integration model of semantic memory

    Science.gov (United States)

    Drane, Daniel L.; Ojemann, Jeffrey G.; Phatak, Vaishali; Loring, David W.; Gross, Robert E.; Hebb, Adam O.; Silbergeld, Daniel L.; Miller, John W.; Voets, Natalie L.; Saindane, Amit M.; Barsalou, Lawrence; Meador, Kimford J.; Ojemann, George A.; Tranel, Daniel

    2012-01-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name.
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory

  6. Famous face identification in temporal lobe epilepsy: support for a multimodal integration model of semantic memory.

    Science.gov (United States)

    Drane, Daniel L; Ojemann, Jeffrey G; Phatak, Vaishali; Loring, David W; Gross, Robert E; Hebb, Adam O; Silbergeld, Daniel L; Miller, John W; Voets, Natalie L; Saindane, Amit M; Barsalou, Lawrence; Meador, Kimford J; Ojemann, George A; Tranel, Daniel

    2013-06-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. 
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory

  7. On the relations among temporal integration for loudness, loudness discrimination, and the form of the loudness function. (A)

    DEFF Research Database (Denmark)

    Poulsen, Torben; Buus, Søren; Florentine, M

    1996-01-01

    of two equal-duration tones, they do not appear to depend on duration. The level dependence of temporal integration and the loudness jnds are consistent with a loudness function [log(loudness) versus SPL] that is flatter at moderate levels than at low and high levels. [Work supported by NIH-NIDCD R01DC...

  8. Frequency band-importance functions for auditory and auditory-visual speech recognition

    Science.gov (United States)

    Grant, Ken W.

    2005-04-01

    In many everyday listening environments, speech communication involves the integration of both acoustic and visual speech cues. This is especially true in noisy and reverberant environments where the speech signal is highly degraded, or when the listener has a hearing impairment. Understanding the mechanisms involved in auditory-visual integration is a primary interest of this work. Of particular interest is whether listeners are able to allocate their attention to various frequency regions of the speech signal differently under auditory-visual conditions and auditory-alone conditions. For auditory speech recognition, the most important frequency regions tend to be around 1500-3000 Hz, corresponding roughly to important acoustic cues for place of articulation. The purpose of this study is to determine the most important frequency region under auditory-visual speech conditions. Frequency band-importance functions for auditory and auditory-visual conditions were obtained by having subjects identify speech tokens under conditions where the speech-to-noise ratio of different parts of the speech spectrum is independently and randomly varied on every trial. Point biserial correlations were computed for each separate spectral region and the normalized correlations are interpreted as weights indicating the importance of each region. Relations among frequency-importance functions for auditory and auditory-visual conditions will be discussed.
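The band-importance weighting described above rests on the point-biserial correlation between trial correctness (a 0/1 outcome) and each band's randomly varied signal-to-noise ratio. A minimal sketch of that statistic (function name hypothetical; a real analysis would compute it once per spectral band):

```python
import math
import statistics

def point_biserial(binary, x):
    """Point-biserial correlation between a 0/1 outcome (e.g., trial
    correct/incorrect) and a continuous variable (e.g., band SNR).

    r_pb = (mean(x | 1) - mean(x | 0)) / sd(x) * sqrt(p * q)
    where p is the proportion of 1s and q = 1 - p.
    """
    n = len(binary)
    x1 = [xi for b, xi in zip(binary, x) if b == 1]
    x0 = [xi for b, xi in zip(binary, x) if b == 0]
    p = len(x1) / n
    q = 1 - p
    sx = statistics.pstdev(x)
    return (statistics.mean(x1) - statistics.mean(x0)) / sx * math.sqrt(p * q)
```

Bands whose SNR strongly predicts correctness get correlations near 1 and hence large normalized importance weights; uninformative bands correlate near 0.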

  9. Preference for Audiovisual Speech Congruency in Superior Temporal Cortex.

    Science.gov (United States)

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to examine the neural responses of human participants during the McGurk illusion (in which auditory /aba/ and visual /aga/ inputs are fused into a perceived /ada/) in a large homogeneous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with incongruent audiovisual stimulation leading to the McGurk illusion, while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that the superior temporal cortex responds preferentially when auditory and visual inputs support the same representation.

  10. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W.; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development 1-4 as well as mechanisms for temporal coding in the central nervous system 5-7.

  11. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc. PMID:26915331
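The voxel-wise within-network connectivity (WNC) measure used above, i.e. each voxel's mean correlation with every other voxel in the face network, can be sketched as follows. This is illustrative only; real analyses operate on preprocessed resting-state fMRI time series, and both function names are hypothetical:

```python
import math
import statistics

def pearson(a, b):
    """Pearson correlation between two equal-length time series."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def within_network_connectivity(timeseries):
    """WNC of each voxel: mean correlation with every other voxel's series.

    timeseries: list of per-voxel signal lists, all the same length.
    """
    n = len(timeseries)
    return [sum(pearson(timeseries[i], timeseries[j])
                for j in range(n) if j != i) / (n - 1)
            for i in range(n)]
```

Voxels whose WNC covaries with "Reading the Mind in the Eyes" scores across participants are the hub candidates the study identifies (here, the rpSTS).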

  12. Time-filtered leapfrog integration of Maxwell equations using unstaggered temporal grids

    Science.gov (United States)

    Mahalov, A.; Moustaoui, M.

    2016-11-01

    A finite-difference time-domain method for integration of Maxwell equations is presented. The computational algorithm is based on the leapfrog time stepping scheme with unstaggered temporal grids. It uses a fourth-order implicit time filter that reduces computational modes and fourth-order finite difference approximations for spatial derivatives. The method can be applied within both staggered and collocated spatial grids. It has the advantage of allowing explicit treatment of terms involving electric current density and application of selective numerical smoothing which can be used to smooth out errors generated by finite differencing. In addition, the method does not require iteration of the electric constitutive relation in nonlinear electromagnetic propagation problems. The numerical method is shown to be effective and stable when employed within Perfectly Matched Layers (PML). Stability analysis demonstrates that the proposed method is effective in stabilizing and controlling numerical instabilities of computational modes arising in wave propagation problems with physical damping and artificial smoothing terms while maintaining higher accuracy for the physical modes. Comparison of simulation results obtained from the proposed method with those computed by the classical time-filtered leapfrog, where Maxwell equations are integrated for a lossy medium, within PML regions and for Kerr-nonlinear media, shows that the proposed method is robust and accurate. The performance of the computational algorithm is also verified by analyzing parametric four-wave mixing in an optical nonlinear Kerr medium. The algorithm is found to accurately predict frequencies and amplitudes of nonlinearly converted waves under realistic conditions proposed in the literature.
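The idea of a time-filtered leapfrog scheme can be illustrated on a scalar test equation. The sketch below uses the classical second-order Robert-Asselin filter for simplicity, whereas the method described above employs a fourth-order implicit filter; it is a toy model, not the authors' FDTD code:

```python
def leapfrog_filtered(f, u0, dt, nsteps, alpha=0.05):
    """Leapfrog integration of du/dt = f(u) with a Robert-Asselin filter.

    The centered leapfrog step u_{n+1} = u_{n-1} + 2*dt*f(u_n) supports a
    spurious computational mode; after each step the filter nudges the
    center value toward the mean of its neighbors to damp that mode.
    """
    u_prev = u0
    u_curr = u0 + dt * f(u0)          # first-order Euler start-up step
    for _ in range(nsteps - 1):
        u_next = u_prev + 2 * dt * f(u_curr)
        # Robert-Asselin filter on the center point
        u_curr += alpha * (u_prev - 2 * u_curr + u_next)
        u_prev, u_curr = u_curr, u_next
    return u_curr
```

On the decay equation du/dt = -u (whose unfiltered leapfrog solution is haunted by a growing computational mode), the filtered scheme tracks the exact solution exp(-t) closely, which is the behavior the fourth-order filter above achieves with less damping of the physical mode.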

  13. INTEGRAL study of temporal properties of bright flares in Supergiant Fast X-ray Transients

    CERN Document Server

    Sidoli, L; Postnov, K

    2016-01-01

    We have characterized the typical temporal behaviour of the bright X-ray flares detected from the three Supergiant Fast X-ray Transients showing the most extreme transient behaviour (XTEJ1739-302, IGRJ17544-2619, SAXJ1818.6-1703). We focus here on the cumulative distributions of the waiting time (the time interval between two consecutive X-ray flares) and the duration of the hard X-ray activity (the duration of the brightest phase of an SFXT outburst), as observed by INTEGRAL/IBIS in the 17-50 keV energy band. From the cumulative distribution of waiting times, it is possible to identify the typical timescale that clearly separates different outbursts, each composed of several single flares on kilosecond timescales. This allowed us to measure the duration of the brightest phase of the outbursts from these three targets, finding that they show heavy-tailed cumulative distributions. We observe a correlation between the total energy emitted during SFXT outbursts and the time interval covered by the outbursts (defined as the ...
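Once a break timescale has been read off the waiting-time cumulative distribution, flares can be grouped into outbursts by splitting the sequence at gaps longer than that timescale. A hypothetical sketch (not the authors' pipeline):

```python
def waiting_times(flare_times):
    """Intervals between consecutive flare times (same units as input)."""
    return [b - a for a, b in zip(flare_times, flare_times[1:])]

def group_into_outbursts(flare_times, t_break):
    """Split a sorted flare-time sequence into outbursts wherever the
    waiting time exceeds the break timescale t_break."""
    groups = [[flare_times[0]]]
    for prev, t in zip(flare_times, flare_times[1:]):
        if t - prev > t_break:
            groups.append([t])      # gap longer than t_break: new outburst
        else:
            groups[-1].append(t)
    return groups
```

Each group's span then gives the duration of one outburst's bright phase, the quantity whose distribution is characterized above.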

  14. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    Science.gov (United States)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
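The term-association idea above (attributes, verbs, and adjectives frequently occurring together) reduces, in its simplest form, to counting term pairs that co-occur within a feedback text. An illustrative sketch, not the authors' implementation:

```python
from collections import Counter
from itertools import combinations

def term_associations(documents, min_count=2):
    """Count term pairs that co-occur within the same feedback text.

    Terms are lowercased whitespace tokens; pairs are stored in sorted
    order so (a, b) and (b, a) count as the same association. Only pairs
    seen at least min_count times are returned.
    """
    pairs = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))
        pairs.update(combinations(terms, 2))
    return {p: c for p, c in pairs.items() if c >= min_count}
```

A production system would first restrict terms to extracted attributes, verbs, and adjectives; the counting core is the same.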

  15. Role of DARPP-32 and ARPP-21 in the Emergence of Temporal Constraints on Striatal Calcium and Dopamine Integration.

    Science.gov (United States)

    Nair, Anu G; Bhalla, Upinder S; Hellgren Kotaleski, Jeanette

    2016-09-01

    In reward learning, the integration of NMDA-dependent calcium and dopamine by striatal projection neurons leads to potentiation of corticostriatal synapses through CaMKII/PP1 signaling. In order to elicit the CaMKII/PP1-dependent response, the calcium and dopamine inputs should arrive in temporal proximity and must follow a specific (dopamine after calcium) order. However, little is known about the cellular mechanism which enforces these temporal constraints on the signal integration. In this computational study, we propose that these temporal requirements emerge as a result of the coordinated signaling via two striatal phosphoproteins, DARPP-32 and ARPP-21. Specifically, DARPP-32-mediated signaling could implement an input-interval dependent gating function, via transient PP1 inhibition, thus enforcing the requirement for temporal proximity. Furthermore, ARPP-21 signaling could impose the additional input-order requirement of calcium and dopamine, due to its Ca2+/calmodulin sequestering property when dopamine arrives first. This highlights the possible role of phosphoproteins in the temporal aspects of striatal signal transduction. PMID:27584878

  16. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha

    2010-02-01

    Full Text Available The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) in subjects with left mesial temporal sclerosis (LMTS) on the behavioral Dichotic Digits Test (DDT) and the event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls; this difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. As a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions relative to controls, with the difference statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of the P300 between affected and unaffected hemispheres.

  17. Joint Recognition and Segmentation of Actions via Probabilistic Integration of Spatio-Temporal Fisher Vectors

    OpenAIRE

    Carvajal, Johanna; McCool, Chris; Lovell, Brian; Sanderson, Conrad

    2016-01-01

    We propose a hierarchical approach to multi-action recognition that performs joint classification and segmentation. A given video (containing several consecutive actions) is processed via a sequence of overlapping temporal windows. Each frame in a temporal window is represented through selective low-level spatio-temporal features which efficiently capture relevant local dynamics. Features from each window are represented as a Fisher vector, which captures first and second order statistics. In...

  18. Improving Depiction of Temporal Bone Anatomy With Low-Radiation Dose CT by an Integrated Circuit Detector in Pediatric Patients

    Science.gov (United States)

    He, Jingzhen; Zu, Yuliang; Wang, Qing; Ma, Xiangxing

    2014-01-01

    Abstract The purpose of this study was to determine the performance of low-dose computed tomography (CT) scanning with an integrated circuit (IC) detector in defining fine structures of the temporal bone in children, by comparison with the conventional detector. The study was performed with the approval of our institutional review board and the patients’ anonymity was maintained. A total of 86 children [...] (P > 0.05). The low-dose CT images acquired with the IC detector provide better depiction of the fine osseous structures of the temporal bone than those acquired with the conventional DC detector. PMID:25526489

  19. Postnatal development of synaptic properties of the GABAergic projection from the inferior colliculus to the auditory thalamus

    OpenAIRE

    Venkataraman, Yamini; Bartlett, Edward L.

    2013-01-01

    The development of auditory temporal processing is important for processing complex sounds as well as for acquiring reading and language skills. Neuronal properties and sound processing change dramatically in auditory cortex neurons after the onset of hearing. However, the development of the auditory thalamus or medial geniculate body (MGB) has not been well studied over this critical time window. Since synaptic inhibition has been shown to be crucial for auditory temporal processing, this st...

  20. Bilateral collicular interaction: modulation of auditory signal processing in frequency domain.

    Science.gov (United States)

    Cheng, L; Mei, H-X; Tang, J; Fu, Z-Y; Jen, P H-S; Chen, Q-C

    2013-04-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from a variety of lower auditory nuclei, intrinsic projections within the IC, contralateral IC through the commissure of the IC and the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in the modulation of frequency-domain signal processing of mice using electrophysiological recording and focal electrical stimulation. Focal electrical stimulation of neurons in one IC produces widespread inhibition and focused facilitation of responses of neurons in the other IC. This bilateral collicular interaction decreases the response magnitude and lengthens the response latency of inhibited IC neurons but produces an opposite effect on the response of facilitated IC neurons. In the frequency domain, the focal electrical stimulation of one IC sharpens or expands the frequency tuning curves (FTCs) of neurons in the other IC to improve frequency sensitivity and the frequency response range. The focal electrical stimulation also produces a shift in the best frequency (BF) of modulated IC (ICMdu) neurons toward that of electrically stimulated IC (ICES) neurons. The degree of bilateral collicular interaction is dependent upon the difference in the BF between the ICES neurons and ICMdu neurons. These data suggest that bilateral collicular interaction is a part of dynamic acoustic signal processing that adjusts and improves signal processing as well as reorganizes collicular representation of signal parameters according to the acoustic experience.

  1. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Abstract Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.

  2. Reality of auditory verbal hallucinations

    Science.gov (United States)

    Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the hallucination felt to the subjects depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency. PMID:19620178

  3. Task-dependent calibration of auditory spatial perception through environmental visual observation

    OpenAIRE

    Luca Brayda

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short-term environmental observation preceding the audio tas...

  4. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

    Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate sour...

  5. Representing Representation: Integration between the Temporal Lobe and the Posterior Cingulate Influences the Content and Form of Spontaneous Thought.

    Directory of Open Access Journals (Sweden)

    Jonathan Smallwood

    Full Text Available When not engaged in the moment, we often spontaneously represent people, places and events that are not present in the environment. Although this capacity has been linked to the default mode network (DMN), it remains unclear how interactions between the nodes of this network give rise to particular mental experiences during spontaneous thought. One hypothesis is that the core of the DMN integrates information from medial and lateral temporal lobe memory systems, which represent different aspects of knowledge. Individual differences in the connectivity between temporal lobe regions and the default mode network core would then predict differences in the content and form of people's spontaneous thoughts. This study tested this hypothesis by examining the relationship between seed-based functional connectivity and the contents of spontaneous thought recorded in a laboratory study several days later. Variations in connectivity from both medial and lateral temporal lobe regions were associated with different patterns of spontaneous thought, and these effects converged on an overlapping region in the posterior cingulate cortex. We propose that the posterior core of the DMN acts as a representational hub that integrates information represented in the medial and lateral temporal lobe and that this process is important in determining the content and form of spontaneous thought.

  6. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    Directory of Open Access Journals (Sweden)

    Vibhakar C Kotak

    2015-08-01

    Full Text Available The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx and that auditory-driven inhibition dominates the postsynaptic responses in a non-sensory cortical region downstream from the auditory cortex.

  7. Regional heavy metal pollution in crops by integrating physiological function variability with spatio-temporal stability using multi-temporal thermal remote sensing

    Science.gov (United States)

    Liu, Meiling; Liu, Xiangnan; Zhang, Biyao; Ding, Chao

    2016-09-01

    Heavy metal stress in crops is characterized by stability in space and time, which differs from other stressors that are typically more transient (e.g., drought, pests/diseases, and mismanagement). The objective of this study is to assess regional heavy metal stress in rice by integrating physiological function variability with spatio-temporal stability based on multi-temporal thermal infrared (TIR) remote sensing images. The field in which the experiment was conducted is located in Zhuzhou City, Hunan Province, China. HJ-1B images and in-situ measured data were collected from rice growing in heavy metal contaminated soils. A stress index (SI) was devised as an indicator of the degree of heavy metal stress on the rice in different growth stages, and a time-spectrum feature space (TSFS) model was used to determine rice heavy metal stress levels. The results indicate that (i) SI is a good indicator of rice damage caused by heavy metal stress: for the same growth stage, minimum values of SI occur in rice subject to high pollution, followed by larger SI with medium pollution and maximum SI for low pollution. (ii) SI shows some variation across growth stages of rice, and the minimum SI occurs at the flowering stage. (iii) The TSFS model is successful at identifying rice heavy metal stress, and the stress levels identified were stable when the model was applied in two different years. This study suggests that regional heavy metal stress in crops can be accurately detected using TIR technology, provided that a sensitive indicator of crop physiological function impairment is used and an effective model is selected. A combination of spectral and spatio-temporal information appears to be a very promising method for monitoring crops under various stressors.

  8. The Global Food Price Crisis and China-World Rice Market Integration: A Spatial-Temporal Rational Expectations Equilibrium Model

    OpenAIRE

    Liu, Xianglin; Romero-Aguilar, Randall S.; Chen, Shu-Ling; Miranda, Mario J.

    2013-01-01

    In this paper, we examine how China, the world’s largest rice producer and consumer, would affect the international rice market if it liberalized its trade in rice and became more fully integrated into the global rice market. The impacts of trade liberalization are estimated using a spatial-temporal rational expectations model of the world rice market characterized by four interdependent markets with stochastic production patterns, constant-elasticity demands, expected-profit maximizing priva...

  9. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal eAmitay

    2012-11-01

    Full Text Available Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers, upon which computational modeling of phase-locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, an effect that could not transfer to the short tone because its integration time is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for the asymmetric transfer of learning.
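The signal-detection-theory framing used in such simulations can be illustrated generically; the equal-variance Gaussian model and the numbers below are purely illustrative, not the study's fitted model:

```python
def d_prime(mean_a, mean_b, sigma):
    """Sensitivity index d' for two equal-variance Gaussian
    internal representations: separation divided by internal noise."""
    return abs(mean_a - mean_b) / sigma

# Two tones 10 Hz apart; halving the internal noise doubles d',
# i.e. the same frequency difference becomes easier to discriminate.
print(d_prime(1000.0, 1010.0, sigma=5.0))   # 2.0
print(d_prime(1000.0, 1010.0, sigma=2.5))   # 4.0
```

This is the sense in which "reduction of internal noise" improves discrimination thresholds: with smaller sigma, a smaller frequency difference suffices to reach a criterion d'.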

  10. Lateralization of auditory-cortex functions.

    Science.gov (United States)

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

    In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. The main emphasis is on hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (called the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to informational sound content but to rapid temporal information, which is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of the functional specialization of sound processing. These altered forms of lateralization may be caused by top-down and bottom-up effects inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding. PMID:14629926

  11. Subsymmetries predict auditory and visual pattern complexity.

    Science.gov (United States)

    Toussaint, Godfried T; Beltran, Juan F

    2013-01-01

    A mathematical measure of pattern complexity based on subsymmetries possessed by the pattern, previously shown to correlate highly with empirically derived measures of cognitive complexity in the visual domain, is found to also correlate significantly with empirically derived complexity measures of perception and production of auditory temporal and musical rhythmic patterns. Not only does the subsymmetry measure correlate highly with the difficulty of reproducing the rhythms by tapping after listening to them, but also the empirical measures exhibit similar behavior, for both the visual and auditory patterns, as a function of the relative number of subsymmetries present in the patterns. PMID:24494441
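The subsymmetry measure can be sketched concretely. A common formalization, assumed here (the paper's exact definition may differ in detail), counts contiguous sub-patterns of length at least 2 that read the same forwards and backwards:

```python
def count_subsymmetries(pattern):
    """Count contiguous sub-patterns (length >= 2) that are palindromic,
    i.e. equal to their own reversal."""
    n = len(pattern)
    count = 0
    for i in range(n):
        for j in range(i + 2, n + 1):   # sub-patterns of length >= 2
            sub = pattern[i:j]
            if sub == sub[::-1]:
                count += 1
    return count

# A binary rhythm (1 = onset, 0 = rest): [1,0,1] and [1,1] are subsymmetries.
print(count_subsymmetries([1, 0, 1, 1]))  # 2
```

Under this measure, patterns with more internal mirror structure score higher, which is the quantity reported to track perceived and reproduced pattern complexity.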

  12. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in...

  13. A neurophysiological deficit in early visual processing in schizophrenia patients with auditory hallucinations.

    Science.gov (United States)

    Kayser, Jürgen; Tenke, Craig E; Kroppmann, Christopher J; Alschuler, Daniel M; Fekri, Shiva; Gil, Roberto; Jarskog, L Fredrik; Harkavy-Friedman, Jill M; Bruder, Gerard E

    2012-09-01

    Existing 67-channel event-related potentials, obtained during recognition and working memory paradigms with words or faces, were used to examine early visual processing in schizophrenia patients prone to auditory hallucinations (AH, n = 26) or not (NH, n = 49) and healthy controls (HC, n = 46). Current source density (CSD) transforms revealed distinct, strongly left- (words) or right-lateralized (faces; N170) inferior-temporal N1 sinks (150 ms) in each group. N1 was quantified by temporal PCA of peak-adjusted CSDs. For words and faces in both paradigms, N1 was substantially reduced in AH compared with NH and HC, who did not differ from each other. The difference in N1 between AH and NH was not due to overall symptom severity or performance accuracy, with both groups showing comparable memory deficits. Our findings extend prior reports of reduced auditory N1 in AH, suggesting a broader early perceptual integration deficit that is not limited to the auditory modality.

  14. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study.

    Science.gov (United States)

    Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta

    2014-03-15

    Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study aimed to improve auditory comprehension in aphasic patients through specific training in the perception of the temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed with the Token Test, phonemic awareness, and a Voice-Onset-Time test. TO perception was assessed using the auditory Temporal-Order Threshold, defined as the shortest interval between two consecutive stimuli necessary to report their before-after relation correctly. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11), aimed at improving sequencing abilities, or control non-temporal training (NT, n=7), focussed on volume discrimination. The TT yielded improved TO perception; moreover, the improvement transferred from the time domain to the language domain, which was not itself trained. The NT improved neither TO perception nor comprehension on any language test. These results agree with previous studies showing improved language competency following TT in language-learning-impaired or dyslexic children. Our results are the first to indicate such benefits in aphasic patients as well. PMID:24388435

  15. Single-molecule diffusion and conformational dynamics by spatial integration of temporal fluctuations

    KAUST Repository

    Bayoumi, Maged Fouad

    2014-10-06

    Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules to map their diffusion behaviours. However, accurate analysis of diffusion behaviours, and including other parameters such as the conformation and size of molecules, remain as limitations to the method. Here, we report a method that addresses the limitations of existing single-molecule localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.
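The cumulative-area idea can be sketched simply: over successive frames, accumulate the set of (pixelized) positions a molecule has visited; faster diffusion grows this area faster. The track coordinates and pixel size below are illustrative, not from the paper:

```python
def cumulative_area(positions, pixel=1.0):
    """Return, per frame, the number of distinct pixels a particle
    has visited so far. Growth rate reflects the diffusion rate."""
    visited, areas = set(), []
    for x, y in positions:
        visited.add((int(x // pixel), int(y // pixel)))
        areas.append(len(visited))
    return areas

# Illustrative 2D track: the area plateaus when the particle revisits pixels.
track = [(0.2, 0.3), (0.4, 0.6), (1.1, 0.5), (1.3, 1.2), (0.1, 0.4)]
print(cumulative_area(track))  # [1, 1, 2, 3, 3]
```

Fitting the growth of such a curve against time is one way temporal fluctuations of occupied area can be tied back to a diffusion coefficient.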

  16. Integrating active sensing into reactive synthesis with temporal logic constraints under partial observations

    OpenAIRE

    Fu, Jie; Topcu, Ufuk

    2014-01-01

    We introduce the notion of online reactive planning with sensing actions for systems with temporal logic constraints in partially observable and dynamic environments. With incomplete information on the dynamic environment, reactive controller synthesis amounts to solving a two-player game with partial observations, which has impractically high computational complexity. To alleviate this computational burden, online replanning via sensing actions avoids solving the strategy in the reactive syst...

  17. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the auditory mechanisms involved, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA ...]. The filterbank was designed to approximate auditory filter shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward-masking data. The results of the present study demonstrate that a “purely” spectrum-based model approach can successfully describe auditory coloration detection even at high...

  18. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG-to-PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  19. A model of auditory nerve responses to electrical stimulation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    peripheral for the cathodic phase. This results in an average difference of 200 μs in spike latency for AP generated by anodic vs cathodic pulses. It is hypothesized here that this difference is large enough to corrupt the temporal coding in the AN. To quantify effects of pulse polarity on auditory......, fail to correctly predict responses to anodic stimulation. This study presents a model that simulates AN responses to anodic and cathodic stimulation. The main goal was to account for the data obtained with monophasic electrical stimulation in cat AN. The model is based on an exponential integrate......-and-fire neuron with two partitions responding individually to anodic and cathodic stimulation. Membrane noise was parameterized based on reported relative spread of AN neurons. Firing efficiency curves and spike-latency distributions were simulated for monophasic and symmetric biphasic stimulation...
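
    The record's model is built on an exponential integrate-and-fire neuron. The following is a loose, dimensionless sketch of that neuron class only; the parameter values are placeholders, and the study's membrane noise and separate anodic/cathodic partitions are omitted.

```python
import math

def eif_spike_time(i_amp, dt=1e-6, t_max=5e-3,
                   tau=0.5e-3, delta_t=0.1, v_th=1.0, v_spike=5.0):
    """Euler-integrate an exponential integrate-and-fire membrane
    (dimensionless units) and return the first spike time, or None
    if the stimulus current i_amp never drives the membrane past
    the spike cutoff within t_max seconds."""
    v = 0.0
    t = 0.0
    while t < t_max:
        # Leak, exponential spike-initiation term, and input current.
        dv = (-v + delta_t * math.exp((v - v_th) / delta_t) + i_amp) / tau
        v += dv * dt
        t += dt
        if v >= v_spike:
            return t
    return None
```

    A stronger pulse crosses threshold sooner, which is the kind of latency difference the record hypothesizes between anodic and cathodic stimulation.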

  20. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single-click and click-pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first- and second-order interneurons. The response to the second click was expressed relative to the single-click response, which allowed us to uncover the basic temporal resolution of these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects, most prominently a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.

  1. Integrated Use of Multi-temporal SAR and Optical Satellite Imagery for Crop Mapping in Ukraine

    Science.gov (United States)

    Lavreniuk, M. S.; Kussul, N.; Skakun, S.

    2014-12-01

    Information on the location and spatial distribution of crops is extremely important for many applications such as crop area estimation, crop yield forecasting and environmental impact analysis [1-2]. Synthetic-aperture radar (SAR) instruments on board remote sensing satellites offer unique features for imaging crops due to their all-weather capability and their ability to capture crop characteristics not observable by optical instruments. This abstract explores the feasibility of using multi-temporal multi-polarization SAR images along with multi-temporal optical images for crop classification in Ukraine using a neural network ensemble. The study area included a JECAM test site in Ukraine which is a part of the Global Agriculture Monitoring (GEOGLAM) initiative. Six optical images were acquired by Landsat-8, and twelve SAR images were acquired by Radarsat-2 (six in FQ8W mode at a 28° incidence angle and six in FQ20W mode at 40°) over the study region. Optical images were atmospherically corrected. SAR images were filtered for speckle and converted to backscatter coefficients. Ground truth data on crop type (274 polygons) were collected during the summer of 2013. In order to perform supervised classification of the multi-temporal satellite imagery, an ensemble of neural networks, in particular multi-layer perceptrons (MLPs), was used. The ensemble improved overall classification accuracy (OA) by +0.1% to +2% compared to an individual network. Adding multi-temporal SAR images to multi-temporal optical images improved both OA and individual class accuracies, in particular for sunflower (gains up to +25.9%), soybeans (+16.2%), and maize (+6.2%). It was also found that a better OA was obtained using the shallow-angle acquisitions (FQ20W, 40°; OA = 77.0%) than the steeper-angle ones (FQ8W, 28°; OA = 71.78%). 1. F. Kogan et al., "Winter wheat yield forecasting in Ukraine based on Earth observation, meteorological data and biophysical models," Int. J. Appl. Earth Observ. Geoinform
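
    An ensemble of MLPs is typically combined by averaging the member networks' per-class probability outputs (soft voting). A minimal sketch of that combination step for one pixel follows; the probability values are invented, and training of the individual networks is omitted.

```python
def soft_vote(per_network_probs):
    """Average the class-probability vectors produced by several
    networks for one pixel, then return the winning class index."""
    n_nets = len(per_network_probs)
    n_classes = len(per_network_probs[0])
    avg = [sum(probs[c] for probs in per_network_probs) / n_nets
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

    For example, with three networks and two classes, soft_vote([[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]]) returns class index 1, even though one network preferred class 0.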

  2. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  4. Visual discrimination of delayed self-generated movement reveals the temporal limit of proprioceptive-visual intermodal integration.

    Science.gov (United States)

    Jaime, Mark; O'Driscoll, Kelly; Moore, Chris

    2016-07-01

    This study examined the intermodal integration of visual-proprioceptive feedback via a novel visual discrimination task of delayed self-generated movement. Participants performed a goal-oriented task in which visual feedback was available only via delayed videos displayed on two monitors-each with different delay durations. During task performance, delay duration was varied for one of the videos in the pair relative to a standard delay, which was held constant. Participants were required to identify and use the video with the lesser delay to perform the task. Visual discrimination of the lesser-delayed video was examined under four conditions in which the standard delay was increased for each condition. A temporal limit for proprioceptive-visual intermodal integration of 3-5s was revealed by subjects' inability to reliably discriminate video pairs. PMID:27208649

  5. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    Science.gov (United States)

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

    The auditory steady state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses induced by low rates (≤40 Hz) emanate mostly from central structures of the brain, while responses to high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. In addition, the gamma-band ASSR (30-90 Hz) has been reported to play an important role in working memory, speech understanding and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech consisted of Chinese phrases, and the noise-like reversed speech was obtained by temporally reversing the speech. Both auditory stimuli were modulated at a frequency of 40 Hz. Ten healthy subjects and 5 patients with hallucination symptoms participated in the experiment. Results showed a reduction in left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the left-hemisphere auditory cortex responded more actively. The ASSR results were consistent with the behavioral results of the patients. Therefore, the gamma-band ASSR is expected to be helpful for rapid and objective diagnosis of hallucination in the clinic. PMID:24142731

  6. Evidence for Integrated Visual Face and Body Representations in the Anterior Temporal Lobes.

    Science.gov (United States)

    Harry, Bronson B; Umla-Runge, Katja; Lawrence, Andrew D; Graham, Kim S; Downing, Paul E

    2016-08-01

    Research on visual face perception has revealed a region in the ventral anterior temporal lobes, often referred to as the anterior temporal face patch (ATFP), which responds strongly to images of faces. To date, the selectivity of the ATFP has been examined by contrasting responses to faces against a small selection of categories. Here, we assess the selectivity of the ATFP in humans with a broad range of visual control stimuli to provide a stronger test of face selectivity in this region. In Experiment 1, participants viewed images from 20 stimulus categories in an event-related fMRI design. Faces evoked more activity than all other 19 categories in the left ATFP. In the right ATFP, equally strong responses were observed for both faces and headless bodies. To pursue this unexpected finding, in Experiment 2, we used multivoxel pattern analysis to examine whether the strong response to face and body stimuli reflects a common coding of both classes or instead overlapping but distinct representations. On a voxel-by-voxel basis, face and whole-body responses were significantly positively correlated in the right ATFP, but face and body-part responses were not. This finding suggests that there is shared neural coding of faces and whole bodies in the right ATFP that does not extend to individual body parts. In contrast, the same approach revealed distinct face and body representations in the right fusiform gyrus. These results are indicative of an increasing convergence of distinct sources of person-related perceptual information proceeding from the posterior to the anterior temporal cortex. PMID:27054399

  7. Integrated single grating compressor for variable pulse front tilt in simultaneously spatially and temporally focused systems.

    Science.gov (United States)

    Block, Erica; Thomas, Jens; Durfee, Charles; Squier, Jeff

    2014-12-15

    A Ti:Al2O3 multipass chirped pulse amplification system is outfitted with a single-grating, simultaneous spatial and temporal focusing (SSTF) compressor platform. For the first time, this novel design has the ability to easily vary the beam aspect ratio of an SSTF beam, and thus the degree of pulse-front tilt at focus, while maintaining a net zero-dispersion system. Accessible variation of pulse-front tilt gives full spatiotemporal control over the intensity distribution at the focus and could lead to better understanding of effects such as nonreciprocal writing and SSTF-material interactions. PMID:25503029

  8. Superior temporal activation in response to dynamic audio-visual emotional cues

    OpenAIRE

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2008-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audiovisual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual cues. Emotion perception research has focused on static facial cues; however, dynamic audiovisual (AV) cues mimic real-world social cues more accura...

  9. Parameters Affecting Temporal Resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    OpenAIRE

    Mor, I.; Vartsky, D.; Dangendorf, V.; Bar, D.; Feldman, G.; Goldberg, M B; Tittelmeier, K.; Bromberger, B.; Brandis, M.; Weierganz, M.

    2013-01-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy-variations of the total scattering cross-section in the En = 1-10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time of flight (ECTOF), it integrates the detector signal during a well-defined neutron Time of Flight window correspondi...

  10. Characterizing functional integrity: intraindividual brain signal variability predicts memory performance in patients with medial temporal lobe epilepsy.

    Science.gov (United States)

    Protzner, Andrea B; Kovacevic, Natasa; Cohn, Melanie; McAndrews, Mary Pat

    2013-06-01

    Computational modeling suggests that variability in brain signals provides important information regarding the system's capacity to adopt different network configurations that may promote optimal responding to stimuli. Although there is limited empirical work on this construct, a recent study indicates that age-related decreases in variability across the adult lifespan correlate with less efficient and less accurate performance. Here, we extend this construct to the assessment of cerebral integrity by comparing fMRI BOLD variability and fMRI BOLD amplitude in their ability to account for differences in functional capacity in patients with focal unilateral medial temporal dysfunction. We were specifically interested in whether either of these BOLD measures could identify a link between the affected medial temporal region and memory performance (as measured by a clinical test of verbal memory retention). Using partial least-squares analyses, we found that variability in a set of regions including the left hippocampus predicted verbal retention and, furthermore, this relationship was similar across a range of cognitive tasks measured during scanning (i.e., the same pattern was seen in fixation, autobiographical recall, and word generation). In contrast, signal amplitude in the hippocampus did not predict memory performance, even for a task that reliably activates the medial temporal lobes (i.e., autobiographical recall). These findings provide a powerful validation of the concept that variability in brain signals reflects functional integrity. Furthermore, this measure can be characterized as a robust biomarker in this clinical setting because it reveals the same pattern regardless of cognitive challenge or task engagement during scanning.

  11. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  12. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

    was correlated with self-report outcome. However, overall the predictive leverage of the various measures was moderate, with single predictors explaining only up to 19 percent of the variance in the auditory-performance measures. a)Now at CNBH, Department of Physiology, Development and Neuroscience, University...... no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution...... between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle...

  13. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, A.L.

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
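
    Penalty-based schedule selection of the kind described here can be sketched in a few lines: candidate schedules assign each message a start time and duration, pairwise overlaps are penalized in proportion to an assumed perceptual-similarity weight, and the lowest-penalty schedule wins. All message names, weights and times below are illustrative, not taken from the dissertation.

```python
def overlap(a, b):
    """Temporal overlap in seconds between two (start, duration) sounds."""
    start = max(a[0], b[0])
    end = min(a[0] + a[1], b[0] + b[1])
    return max(0.0, end - start)

def schedule_penalty(schedule, similarity):
    """Sum pairwise penalties: overlap duration weighted by how
    perceptually confusable the two messages are assumed to be."""
    ids = sorted(schedule)
    total = 0.0
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            weight = similarity.get(frozenset((a, b)), 0.0)
            total += overlap(schedule[a], schedule[b]) * weight
    return total

def best_schedule(candidates, similarity):
    """Return the candidate schedule with the lowest penalty score."""
    return min(candidates, key=lambda s: schedule_penalty(s, similarity))
```

    Under this scoring, a schedule that staggers two confusable messages beats one that plays them simultaneously, which is the behavior the dissertation's heuristic search aims for.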

  14. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Chi-Wing, Ng; Bethany, Plakke; Amy, Poremba

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  15. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

    The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  16. From 3D to 4D: Integration of temporal information into CT angiography studies.

    Science.gov (United States)

    Haubenreisser, Holger; Bigdeli, Amir; Meyer, Mathias; Kremer, Thomas; Riester, Thomas; Kneser, Ulrich; Schoenberg, Stefan O; Henzler, Thomas

    2015-12-01

    CT angiography (CTA) is the current clinical standard for imaging many vascular diseases. It is traditionally performed with a single arterial contrast phase. However, advances in CT technology allow dynamic acquisition of the contrast bolus, adding temporal information to the examination. The aim of this article is to highlight the clinical possibilities of dynamic CTA using two examples. The accuracy of detection and quantification of stenosis in patients with peripheral arterial occlusive disease, especially in stages III and IV, is significantly improved by dynamic CTA examinations. Post-interventional follow-up examinations after EVAR benefit from dynamic information, allowing higher sensitivity and specificity as well as more accurate classification of potential endoleaks. The radiation dose reported for these dynamic examinations is low, but it can be further optimized by using lower tube voltages. There are a multitude of applications for dynamic CTA that need to be further explored in future studies.

  17. Integration of various data sources for transient groundwater modeling with spatio-temporally variable fluxes—Sardon study case, Spain

    Science.gov (United States)

    Lubczynski, Maciek W.; Gurwin, Jacek

    2005-05-01

    Spatio-temporal variability of recharge (R) and groundwater evapotranspiration (ETg) fluxes in the granite Sardon catchment in Spain (~80 km²) has been assessed based on the integration of various data sources and methods within a numerical MODFLOW groundwater model. The data sources and methods included: remote sensing solution of the surface energy balance using satellite data, sap flow measurements, chloride mass balance, automated monitoring of climate, depth to groundwater table and river discharges, 1D reservoir modeling, GIS modeling, field cartography and aerial photo interpretation, slug and pumping tests, and resistivity, electromagnetic and magnetic resonance soundings. The presented case study provides not only a detailed evaluation of the complexity of spatio-temporally variable fluxes, but also a complete and generic methodology of modern data acquisition and data integration in transient groundwater modeling for spatio-temporal groundwater balancing. The calibrated numerical model showed spatially variable patterns of R and ETg fluxes despite a uniform rainfall pattern. The seasonal variability of fluxes indicated: (1) R in the range of 0.3-0.5 mm/d within the ~8 months of the wet season, with exceptional peaks as high as 0.9 mm/d in January and February and no recharge in July and August; (2) a year-round stable lateral groundwater outflow (Qg) in the range of 0.08-0.24 mm/d; (3) ETg = 0.64, 0.80 and 0.55 mm/d in the dry seasons of 1997, 1998 and 1999, respectively, and <0.05 mm/d in wet seasons; (4) temporally variable aquifer storage, which gains water in wet seasons shortly after rain showers and loses water in dry seasons mainly due to groundwater evapotranspiration. The dry season sap flow measurements of tree transpiration performed in homogeneous stands of Quercus ilex and Quercus pyrenaica indicated flux rates of 0.40 and 0.15 mm/d, respectively. The dry season tree transpiration for the entire catchment was ~0.16 mm/d. The availability of dry season

  18. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-infrared-camera tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation in order to support online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  19. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as coming from a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  20. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
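
    The coincidence-detector mechanism described here can be caricatured in a few lines: each sound pulse evokes a direct neural response and an internally delayed copy, and coincidences occur only when the direct response to one pulse aligns with the delayed copy from the previous pulse, i.e. when the pulse period matches the internal delay. The times below are in milliseconds, and the delay and tolerance-window values are illustrative, not the cricket's measured parameters.

```python
def coincidence_response(pulse_times, delay, window=2.0):
    """Each pulse evokes a direct response at its own time and a delayed
    internal copy at time + delay; count pulses whose direct response
    coincides (within +/- window ms) with some delayed copy."""
    delayed = [t + delay for t in pulse_times]
    return sum(
        1 for t in pulse_times
        if any(abs(t - d) <= window for d in delayed)
    )
```

    A pulse train whose period equals the internal delay (e.g. clicks every 20 ms against a 20 ms delay) drives the detector on every pulse after the first, while a mismatched period produces no coincidences, yielding selectivity for the conspecific pulse rate.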

  1. Integrating Real-time and Manual Monitored Soil Moisture Data to Predict Hillslope Soil Moisture Variations with High Temporal Resolutions

    Science.gov (United States)

    Zhu, Qing; Lv, Ligang; Zhou, Zhiwen; Liao, Kaihua

    2016-04-01

    Spatio-temporal variability of soil moisture remains a challenge to understand. A trade-off exists between spatial coverage and temporal resolution when using manual versus real-time soil moisture monitoring methods, which has restricted the comprehensive and intensive examination of soil moisture dynamics. In this study, we aimed to integrate manually and real-time monitored soil moisture to depict hillslope soil moisture dynamics with good spatial coverage and temporal resolution. Linear (stepwise multiple linear regression, SMLR) and non-linear (support vector machine, SVM) models were used to predict soil moisture at 38 manual sites (sampled 1-2 times per month) from soil moisture automatically collected at three real-time monitoring sites (sampled every 5 min). By comparing the accuracies of SMLR and SVM at each manual site, the optimal soil moisture prediction model for that site was determined. Results show that soil moisture at these 38 manual sites can be reliably predicted (in terms of root mean square errors). Wetness index, profile curvature, and the relative difference of soil moisture and its standard deviation influenced the selection of the prediction model, since they relate to the dynamics of soil water distribution and movement. Using this approach, hillslope soil moisture spatial distributions at un-sampled times and dates were predicted after a typical rainfall event, and missing information on hillslope soil moisture dynamics was successfully recovered. This can benefit the determination of hot spots and moments of soil water movement, as well as the design of proper soil moisture monitoring plans at the field scale.
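
    The per-site model selection step, in which the SMLR and SVM predictions are compared against observations at a manual site and the more accurate model is kept, can be sketched as follows. The soil moisture values below are invented, and the regression models themselves are omitted.

```python
import math

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def pick_model(obs, candidate_preds):
    """Return the name of the candidate model with the lowest RMSE
    at one manual monitoring site."""
    return min(candidate_preds,
               key=lambda name: rmse(candidate_preds[name], obs))
```

    Running this per site yields a site-specific choice of linear or non-linear model, mirroring the paper's finding that neither model class wins everywhere.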

  2. Effect of auditory integration training on cognitive function in elderly people with mild cognitive impairment

    Institute of Scientific and Technical Information of China (English)

    毛晓红; 魏秀红

    2012-01-01

    Objective: To evaluate the effect of auditory integration training on cognitive function in elderly people with mild cognitive impairment (MCI). Methods: Sixty elderly people aged 60 to 75 years with MCI were randomly divided into a training group (n = 30) and a control group (n = 30). The training group received auditory integration training under the guidance of the research team from 9:00 to 9:30 each morning, six days per week, for six months; the control group received no training intervention. Cognitive function in both groups was assessed with Basic Cognitive Ability Test software before and six months after the intervention, with within- and between-group comparisons. Results: After six months, the training group performed better than the control group on the digit rapid copy, Chinese character rapid comparison, and mental arithmetic answer recall subtests of the Basic Cognitive Ability Test (P < 0.05), and also better than its own pre-intervention performance (P < 0.05). Conclusions: Auditory integration training can improve cognitive function in elderly people with MCI.

  3. Luminance and opponent-color contributions to visual detection and adaptation and to temporal and spatial integration.

    Science.gov (United States)

    King-Smith, P E; Carden, D

    1976-07-01

    We show how the processes of visual detection and of temporal and spatial summation may be analyzed in terms of parallel luminance (achromatic) and opponent-color systems; a test flash is detected if it exceeds the threshold of either system. The spectral sensitivity of the luminance system may be determined by a flicker method, and has a single broad peak near 555 nm; the spectral sensitivity of the opponent-color system corresponds to the color recognition threshold, and has three peaks at about 440, 530, and 600 nm (on a white background). The temporal and spatial integration of the opponent-color system is generally greater than that of the luminance system; further, a white background selectively depresses the sensitivity of the luminance system relative to the opponent-color system. Thus relatively large (1 degree) and long (200 msec) spectral test flashes on a white background are detected by the opponent-color system except near 570 nm; the contribution of the luminance system becomes more prominent if the size or duration of the test flash is reduced, or if the white background is extinguished. The present analysis is discussed in relation to Stiles' model of independent π mechanisms. PMID:978286
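The detection rule above (a flash is seen if it exceeds the threshold of either system) can be expressed compactly: the overall detection threshold is the lower of the two channel thresholds. The threshold values below are invented for illustration, merely shaped to echo the qualitative pattern reported (opponent-colour more sensitive except near 570 nm).

```python
# Hypothetical channel thresholds (arbitrary units) at a few wavelengths (nm).
luminance_thr = {440: 8.0, 530: 5.0, 570: 4.0, 600: 6.0}
opponent_thr  = {440: 3.0, 530: 2.5, 570: 6.0, 600: 3.0}

def detection_threshold(wl):
    """A flash is detected if it exceeds EITHER channel's threshold,
    so the overall threshold is the minimum of the two."""
    return min(luminance_thr[wl], opponent_thr[wl])

def detecting_channel(wl):
    """Which system mediates detection at this wavelength."""
    return "luminance" if luminance_thr[wl] < opponent_thr[wl] else "opponent-colour"

for wl in (440, 530, 570, 600):
    print(wl, detection_threshold(wl), detecting_channel(wl))
```

With these illustrative numbers the opponent-colour system mediates detection everywhere except near 570 nm, matching the qualitative description in the abstract.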

  4. An integrated approach for potato crop intensification using temporal remote sensing data

    Science.gov (United States)

    Panigrahy, S.; Chakraborty, M.

    Temporal remote sensing data, along with soil, physiographic, rainfall and temperature information, were modelled to derive a potato-growing environment index. This index was used to identify agricultural areas suitable for growing potato crops in Bardhaman district, West Bengal, India. Crop rotation information derived from multi-date Indian Remote Sensing Satellite (IRS) LISS-I data was used to characterize the present utilization pattern of the identified area. The results showed that around 99,000 ha of agricultural area is suitable for growing potato crops. At present, only 36-38% of this area is under potato cultivation. Around 15% of the area goes to summer rice and is thus not available for potato cultivation. The rest of the identified area, around 45,000 ha (46-48%), lies fallow after the harvest of kharif rice. If given priority under the potato crop intensification programme initiated by the Government of West Bengal, this area has a high probability of success.
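The layer-modelling step can be illustrated as a simple weighted overlay: score each grid cell on every input layer, combine the scores into an index, and flag cells above a cutoff as suitable. The variables, weights and threshold below are hypothetical; the abstract does not specify the actual model.

```python
# Hypothetical layer scores per grid cell (all rescaled to 0-1) and weights.
weights = {"soil": 0.3, "physiography": 0.2, "rainfall": 0.3, "temperature": 0.2}
cells = [
    {"soil": 0.9, "physiography": 0.8, "rainfall": 0.7, "temperature": 0.9},
    {"soil": 0.4, "physiography": 0.5, "rainfall": 0.3, "temperature": 0.6},
]

def suitability(cell):
    """Potato-growing environment index as a weighted sum of layer scores."""
    return sum(w * cell[k] for k, w in weights.items())

SUITABLE_CUTOFF = 0.7  # hypothetical threshold for 'suitable for potato'
suitable = [suitability(c) >= SUITABLE_CUTOFF for c in cells]
print([round(suitability(c), 2) for c in cells], suitable)
```

In a real GIS workflow the same calculation would run per pixel over rasters; the scalar version above only shows the combination logic.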

  5. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  6. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    Science.gov (United States)

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  7. [Central auditory prosthesis].

    Science.gov (United States)

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of a bilateral acoustic neuroma. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open set speech perception. In the search for alternative procedures our research group in collaboration with Cochlear Ltd. (Australia) developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has the potential as a new target for an auditory prosthesis as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results as well as the background concepts of ABI and AMI. PMID:19517084

  8. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate the involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and pedigree diagrams were established. History of illness, physical examination, pure tone audiometry, acoustic reflex, ABRs, and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received a vestibular caloric test, computed tomography (CT) scan of the temporal bone and electrocardiography (ECG) to exclude other possible neuropathic disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflexes and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees Ⅰ and Ⅱ, two affected brothers were found respectively, while in Pedigree Ⅲ, two sisters were affected. All the patients were otherwise normal without

  9. Bilateral Collicular Interaction: Modulation of Auditory Signal Processing in Amplitude Domain

    Science.gov (United States)

    Fu, Zi-Ying; Wang, Xin; Jen, Philip H.-S.; Chen, Qi-Cai

    2012-01-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from many lower auditory nuclei, intrinsic projections within the IC, the contralateral IC through the commissure of the IC, and the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in modulating amplitude-domain signal processing using electrophysiological recording, acoustic stimulation and focal electrical stimulation. Focal electrical stimulation of one (ipsilateral) IC produces widespread inhibition (61.6%) and focused facilitation (9.1%) of responses of neurons in the other (contralateral) IC, while 29.3% of the neurons are not affected. Bilateral collicular interaction produces a decrease in the response magnitude and an increase in the response latency of inhibited IC neurons, but produces opposite effects on the responses of facilitated IC neurons. These two groups of neurons are not separately located and are tonotopically organized within the IC. The modulatory effect is strongest at low sound levels and is dependent upon the interval between the acoustic and electrical stimuli. Focal electrical stimulation of the ipsilateral IC compresses or expands the rate-level functions of contralateral IC neurons, and produces a shift in the minimum threshold and dynamic range of contralateral IC neurons for as long as 150 minutes. The degree of bilateral collicular interaction is dependent upon the difference in best frequency between the electrically stimulated IC neurons and the modulated IC neurons. These data suggest that bilateral collicular interaction mainly changes the ratio between excitation and inhibition during signal processing so as to sharpen the amplitude sensitivity of IC neurons. Bilateral interaction may also be involved in acoustic
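One way to picture the reported compression of rate-level functions is a sigmoid whose minimum threshold and dynamic range are shifted by contralateral stimulation. The parameters below are hypothetical, chosen only to reproduce the qualitative finding that modulation is strongest at low sound levels; this is not the authors' model.

```python
import math

def rate_level(level_db, threshold_db, dynamic_range_db, max_rate=100.0):
    """Sigmoid rate-level function of an IC neuron (illustrative parameters)."""
    x = (level_db - threshold_db) / dynamic_range_db
    return max_rate / (1.0 + math.exp(-8.0 * (x - 0.5)))

levels = range(0, 101, 10)
# Control responses vs. responses during focal stimulation of the other IC,
# modeled simply as a raised minimum threshold, compressed dynamic range,
# and reduced maximum rate (all values hypothetical).
control = [rate_level(l, 20, 40) for l in levels]
inhibited = [rate_level(l, 30, 25, max_rate=70.0) for l in levels]

# Inhibition is most effective at low sound levels, as reported.
print(round(control[3], 1), round(inhibited[3], 1))   # responses at 30 dB
```

The point of the sketch is only that a threshold/dynamic-range shift changes the excitation-inhibition balance most visibly near threshold, echoing the abstract's level dependence.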

  10. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and poor speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds, and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included electrophysiological and behavioral measures of auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance on some of the behavioral auditory processing tests, and greater hearing aid benefit in noisy situations (p < 0.05). No changes were noted for the control group (p > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, and figure-to-ground for verbal sounds, and greater benefits in reverberant and noisy environments.

  11. Quantifying the spatio-temporal dynamics of woody plant encroachment using an integrative remote sensing, GIS, and spatial modeling approach

    Science.gov (United States)

    Buenemann, Michaela

    Despite a longstanding universal concern about and intensive research into woody plant encroachment (WPE)---the replacement of grasslands by shrub- and woodlands---our accumulated understanding of the process has either not been translated into sustainable rangeland management strategies, or has been translated with only limited success. In order to increase our scientific insights into WPE, move us one step closer toward the sustainable management of rangelands affected by or vulnerable to the process, and identify needs for a future global research agenda, this dissertation presents a critical, qualitative and quantitative assessment of the existing literature on the topic and evaluates the utility of an integrative remote sensing, GIS, and spatial modeling approach for quantifying the spatio-temporal dynamics of WPE. Findings from this research suggest that gaps in our current understanding of WPE and difficulties in devising sustainable rangeland management strategies are in part due to the complex spatio-temporal web of interactions between geoecological and anthropogenic variables involved in the process, as well as limitations of presently available data and techniques. However, an in-depth analysis of the published literature also reveals that the aforementioned problems are caused by two further crucial factors: the absence of information acquisition and reporting standards and the relative lack of long-term, large-scale, multi-disciplinary research efforts. The methodological framework proposed in this dissertation yields data that are easily standardized according to various criteria and facilitates the integration of spatially explicit data generated by a variety of studies. This framework may thus provide one common ground for scientists from a diversity of fields. Also, it has utility for both research and management. Specifically, this research demonstrates that the application of cutting-edge remote sensing techniques (Multiple Endmember Spectral Mixture

  12. Time Course Effects of Temporal Integration in Different Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    陈本友; 黄希庭

    2012-01-01

    Temporal integration refers to the perceptual process by which successively presented stimuli are combined into a single meaningful representation. This study examined the time course of temporal integration for different facial expressions: each of three whole facial expression pictures was segmented into three parts, each containing a salient facial feature (eyes, nose, or mouth), and the parts were presented sequentially to participants at different inter-stimulus intervals. The results showed that the time course of temporal integration (SOA) was influenced by the meaningfulness of the stimulus material, with a longer time course of temporal integration for facial expressions than for faces. A type difference also existed in the time course of temporal integration across facial expressions: the time course for happy expressions was clearly longer than for angry or sad expressions.

  13. Integrated approach for understanding spatio-temporal changes in forest resource distribution in the central Himalaya

    Institute of Scientific and Technical Information of China (English)

    A K Joshi; P K Joshi; T Chauhan; Brijmohan Bairwa

    2014-01-01

    Intense anthropogenic exploitation has altered distribution of forest resources. This change was analyzed using visual interpretation of satellite data of 1979, 1999 and 2009. Field and interactive social surveys were conducted to identify spatial trends in forest degradation and data were mapped on forest cover and land use maps. Perceptions of villagers were compiled in a pictorial representation to understand changes in forest resource distribution in central Himalaya from 1970 to 2010. Forested areas were subject to degradation and isolation due to loss of connecting forest stands. Species like Lantana camara and Eupatorium adenophorum invaded forest landscapes. Intensity of human pressure differed by forest type and elevation. An integrated approach is needed to monitor forest resource distribution and disturbance.

  14. Integrating environmental equity, energy and sustainability: A spatial-temporal study of electric power generation

    Science.gov (United States)

    Touche, George Earl

    The theoretical scope of this dissertation encompasses the ecological factors of equity and energy. Literature important to environmental justice and sustainability are reviewed, and a general integration of global concepts is delineated. The conceptual framework includes ecological integrity, quality human development, intra- and inter-generational equity and risk originating from human economic activity and modern energy production. The empirical focus of this study concentrates on environmental equity and electric power generation within the United States. Several designs are employed while using paired t-tests, independent t-tests, zero-order correlation coefficients and regression coefficients to test seven sets of hypotheses. Examinations are conducted at the census tract level within Texas and at the state level across the United States. At the community level within Texas, communities that host coal or natural gas utility power plants and corresponding comparison communities that do not host such power plants are tested for compositional differences. Comparisons are made both before and after the power plants began operating for purposes of assessing outcomes of the siting process and impacts of the power plants. Relationships between the compositions of the hosting communities and the risks and benefits originating from the observed power plants are also examined. At the statewide level across the United States, relationships between statewide composition variables and risks and benefits originating from statewide electric power generation are examined. Findings indicate the existence of some limited environmental inequities, but they do not indicate disparities that confirm the general thesis of environmental racism put forth by environmental justice advocates. 
Although environmental justice strategies that would utilize Title VI of the 1964 Civil Rights Act and the disparate impact standard do not appear to be applicable, some findings suggest potential

  15. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  16. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  17. Multimodal integration of time.

    Science.gov (United States)

    Bausenhart, Karin M; de la Rosa, Maria Dolores; Ulrich, Rolf

    2014-01-01

    Recent studies suggest that the accuracy of duration discrimination for visually presented intervals is strongly impaired by concurrently presented auditory intervals of different duration, but not vice versa. Because these studies rely mostly on accuracy measures, it remains unclear whether this impairment results from changes in perceived duration or rather from a decrease in perceptual sensitivity. We therefore assessed complete psychometric functions in a duration discrimination task to disentangle effects on perceived duration and sensitivity. Specifically, participants compared two empty intervals marked by either visual or auditory pulses. These pulses were either presented unimodally, or accompanied by task-irrelevant pulses in the respective other modality, which defined conflicting intervals of identical, shorter, or longer duration. Participants were instructed to base their temporal judgments solely on the task-relevant modality. Despite this instruction, perceived duration was clearly biased toward the duration of the intervals marked in the task-irrelevant modality. This was not only found for the discrimination of visual intervals, but also, to a lesser extent, for the discrimination of auditory intervals. Discrimination sensitivity, however, was similar between all multimodal conditions, and only improved compared to the presentation of unimodal visual intervals. In a second experiment, evidence for multisensory integration was even found when the task-irrelevant modality did not contain any duration information, thus excluding noncompliant attention allocation as a basis of our results. Our results thus suggest that audiovisual integration of temporally discrepant signals does not impair discrimination sensitivity but rather alters perceived duration, presumably by means of a temporal ventriloquism effect. PMID:24351985
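The logic of separating perceived duration from sensitivity with a psychometric function can be sketched as follows: the point of subjective equality (PSE) captures bias in perceived duration, while the slope term (here a JND-like parameter) captures discrimination sensitivity. The data and grid-search fit below are synthetic illustrations, not the authors' analysis code.

```python
import math

def logistic(x, pse, jnd):
    """Probability of judging the comparison longer than the standard."""
    return 1.0 / (1.0 + math.exp(-(x - pse) / jnd))

# Synthetic "proportion judged longer" data for comparison durations (ms)
# against a 500 ms standard, with a +40 ms shift in perceived duration.
durations = [400, 440, 480, 520, 560, 600]
true_pse, true_jnd = 540.0, 25.0
p_longer = [logistic(d, true_pse, true_jnd) for d in durations]

# Grid-search fit of the psychometric function: recovering PSE and JND
# separately is what lets one distinguish a bias in perceived duration
# from a loss of discrimination sensitivity.
best = min(
    ((pse, jnd) for pse in range(500, 581, 2) for jnd in range(10, 51)),
    key=lambda th: sum((logistic(d, th[0], th[1]) - p) ** 2
                       for d, p in zip(durations, p_longer)),
)
print(best)   # recovered (PSE, JND)
```

A shifted PSE with an unchanged JND corresponds to the paper's finding: multimodal conflict alters perceived duration without impairing sensitivity.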

  18. Assessing temporal uncertainties in integrated groundwater management: an opportunity for change?

    Science.gov (United States)

    Anglade, J. A.; Billen, G.; Garnier, J.

    2013-12-01

    Since the early 1990s, high nitrate concentrations (occasionally exceeding the European drinking water standard of 50 mg NO3-/l) have been recorded in the borewells supplying the water requirements of Auxerre's 60,000 inhabitants. The water catchment area (86 km2) lies in a rural area dedicated to field crop production in intensive cereal farming systems based on massive inputs of synthetic fertilizers. In 1998, a co-management committee comprising the city of Auxerre, the rural municipalities located in the catchment area, consumers and farmers was created as a forward-looking associative structure to achieve integrated, adaptive and sustainable management of the resource. In 2002, 18 years after the first signs of water quality degradation, multiparty negotiation led to a cooperative agreement: a contribution, funded by a surcharge on consumers' water bills, to assist farmers in adopting new practices (optimized fertilizer application, catch crops, and buffer strips). The management strategy, initially integrated and operating on a voluntary basis, did not rapidly deliver on its promises (there was no significant decrease in nitrate concentrations) and evolved into a combination of short-term palliative solutions and contractual and regulatory instruments with higher requirements. The establishment of a regulatory framework caused major tensions between stakeholders, bringing about discouragement and a lack of understanding as to the absence of results on water quality after 20 years of joint action. At this point the urban-rural solidarity was in danger of being undermined, so the time issue, i.e. the delay between changes in agricultural pressure and visible effects on water quality, was scientifically addressed and communicated to all the parties involved.
First, water age dating analysis through CFC and SF6 (anthropogenic gases) coupled with a statistical long-term analysis of agricultural evolutions revealed a residence time in the Sequanian limestones

  19. Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Potton, Anita; Birtles, Deidre; Frostick, Caroline; Moore, Derek G.

    2013-01-01

    The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties and difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech in 6–9 month old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9 month old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed-up at the age of 14–16 months, using two measures of language development, the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants. PMID:23882240

  20. Integrated remote sensing for multi-temporal analysis of urban land cover-climate interactions

    Science.gov (United States)

    Savastru, Dan M.; Zoran, Maria A.; Savastru, Roxana S.

    2016-08-01

    Climate change is considered the biggest future environmental threat in the south-eastern part of Europe, and in the context of predicted global warming, urban climate is an important issue in scientific research. Surface energy processes play an essential role in urban weather, climate and hydrological cycles, as well as in urban heat redistribution. This paper investigated the influence of urban growth on the thermal environment, in relation to other biophysical variables, in the Bucharest metropolitan area of Romania. Remote sensing data from Landsat TM/ETM+ and time series from the MODIS Terra/Aqua sensors were used to assess urban land cover-climate interactions over the period 2000-2015. Vegetation abundances and percent impervious surfaces were derived by means of a linear spectral mixture model, and a method for effectively enhancing impervious surfaces was developed to accurately examine urban growth. The land surface temperature (Ts), a key parameter for the analysis of urban thermal characteristics, was also analyzed in relation to the Normalized Difference Vegetation Index (NDVI) at the city level. Based on these parameters, urban growth, the urban heat island (UHI) effect and the relationships of Ts to other biophysical parameters were analyzed. The correlation analyses revealed that, at the pixel scale, Ts had a strong positive correlation with percent impervious surfaces and a negative correlation with vegetation abundances at the regional scale. This analysis provides an integrated research scheme, and the findings can be very useful for urban ecosystem modeling.
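The core Ts-NDVI analysis reduces to computing a band ratio per pixel and correlating it with surface temperature. A minimal sketch with toy pixel values (not the study's data) is:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Toy pixels: vegetated pixels (high NIR reflectance) tend to be cooler.
nir = [0.6, 0.5, 0.4, 0.3, 0.2]
red = [0.1, 0.15, 0.2, 0.25, 0.3]
ts = [295.0, 297.0, 300.0, 303.0, 306.0]   # land surface temperature (K)

ndvi_vals = [ndvi(n, r) for n, r in zip(nir, red)]
print(round(pearson(ndvi_vals, ts), 3))    # strongly negative, as in the study
```

In the actual study the same correlation would be computed over full raster scenes, and impervious-surface fraction would enter as a second explanatory variable with the opposite sign.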

  1. Allen Brain Atlas: an integrated spatio-temporal portal for exploring the central nervous system.

    Science.gov (United States)

    Sunkin, Susan M; Ng, Lydia; Lau, Chris; Dolbeare, Tim; Gilbert, Terri L; Thompson, Carol L; Hawrylycz, Michael; Dang, Chinh

    2013-01-01

    The Allen Brain Atlas (http://www.brain-map.org) provides a unique online public resource integrating extensive gene expression data, connectivity data and neuroanatomical information with powerful search and viewing tools for the adult and developing brain in mouse, human and non-human primate. Here, we review the resources available at the Allen Brain Atlas, describing each product and data type [such as in situ hybridization (ISH) and supporting histology, microarray, RNA sequencing, reference atlases, projection mapping and magnetic resonance imaging]. In addition, standardized and unique features in the web applications are described that enable users to search and mine the various data sets. Features include both simple and sophisticated methods for gene searches, colorimetric and fluorescent ISH image viewers, graphical displays of ISH, microarray and RNA sequencing data, Brain Explorer software for 3D navigation of anatomy and gene expression, and an interactive reference atlas viewer. In addition, cross data set searches enable users to query multiple Allen Brain Atlas data sets simultaneously. All of the Allen Brain Atlas resources can be accessed through the Allen Brain Atlas data portal.

  2. Electrophysiologic Assessment of Auditory Training Benefits in Older Adults.

    Science.gov (United States)

    Anderson, Samira; Jenkins, Kimberly

    2015-11-01

    Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy. PMID:27587912

  3. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel Nelken

    2011-09-01

    Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, that is represented first in the auditory brain.

  4. Concentric scheme of monkey auditory cortex

    Science.gov (United States)

    Kosaki, Hiroko; Saunders, Richard C.; Mishkin, Mortimer

    2003-04-01

    The cytoarchitecture of the rhesus monkey's auditory cortex was examined using immunocytochemical staining with parvalbumin, calbindin-D28K, and SMI32, as well as staining for cytochrome oxidase (CO). The results suggest that Kaas and Hackett's scheme of the auditory cortices can be extended to include five concentric rings surrounding an inner core. The inner core, containing areas A1 and R, is the most densely stained with parvalbumin and CO and can be separated on the basis of laminar patterns of SMI32 staining into lateral and medial subdivisions. From the inner core to the fifth (outermost) ring, parvalbumin staining gradually decreases and calbindin staining gradually increases. The first ring corresponds to Kaas and Hackett's auditory belt, and the second, to their parabelt. SMI32 staining revealed a clear border between these two. Rings 2 through 5 extend laterally into the dorsal bank of the superior temporal sulcus. The results also suggest that the rostral tip of the outermost ring adjoins the rostroventral part of the insula (area Pro) and the temporal pole, while the caudal tip adjoins the ventral part of area 7a.

  5. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit.

  6. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    Directory of Open Access Journals (Sweden)

    David Jenson

    2014-07-01

    Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet how sensorimotor integration supports these functions is unclear, due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to 1) identify clusters of µ components common to all conditions and 2) examine real-time event-related spectral perturbations (ERSP) within the alpha and beta bands. 17 and 15 out of 20 participants produced left and right µ components, respectively, localized to the precentral gyri. Discrimination conditions were characterized by significant (pFDR < .05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that µ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. µ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while µ-alpha ERD may index re-afferent sensory feedback during speech rehearsal and production.
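    As a rough illustration of the alpha and beta band-power quantities underlying the ERSP analysis described above, a single-FFT band-power estimate might look like the following. This is a deliberate simplification of the study's time-resolved method, and the sampling rate and signal are synthetic:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean FFT power of `signal` within the [low, high] Hz band -- a crude,
    time-collapsed stand-in for event-related spectral estimates."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return float(psd[mask].mean())

fs = 250  # Hz; an illustrative EEG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic mu-rhythm: strong ~10 Hz (alpha) plus weaker ~20 Hz (beta)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 15, 25)  # alpha power dominates in this signal
```

    ERS and ERD are then expressed as increases or decreases of such band power relative to a pre-stimulus baseline.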

  7. The Temporal Dynamics of Feature Integration for Color, form, and Motion

    Directory of Open Access Journals (Sweden)

    KS Pilz

    2012-07-01

    When two similar visual stimuli are presented in rapid succession, only their fused image is perceived, without conscious access to the single stimuli. Such feature fusion occurs both for color (e.g., Efron, 1973) and form (e.g., Scharnowski et al., 2007). For verniers, the fusion process lasts for more than 400 ms, as has been shown using TMS (Scharnowski et al., 2009). In three experiments, we used light masks to investigate the time course of feature fusion for color, form, and motion. In experiment one, two verniers were presented in rapid succession with opposite offset directions. Subjects had to indicate the offset direction of the vernier. In a second experiment, a red and a green disk were presented in rapid succession, and subjects had to indicate whether the fused, yellow disk appeared rather than red or green. In a third experiment, three frames of random dots were presented successively. The first two frames created a percept of apparent motion to the upper right and the last two frames one to the upper left, or vice versa. Subjects had to indicate the direction of motion. All stimuli were presented foveally. In all three experiments, we first balanced performance so that neither the first nor the second stimulus dominated the fused percept. In a second step, a light mask was presented either before, during, or after stimulus presentation. Depending on presentation time, the light masks modulated the fusion process so that either the first or the second stimulus dominated the percept. Our results show that unconscious feature fusion lasts more than five times longer than the actual stimulus duration, which indicates that individual features are stored for a substantial amount of time before they are integrated.

  9. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Golden, Hannah L; Agustus, Jennifer L; Goll, Johanna C; Downey, Laura E; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known 'cocktail party effect' as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629

  10. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activations in auditory cortex compared to normal-hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization, whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  11. Validation of a station-prototype designed to integrate temporally soil N2O fluxes: IPNOA Station prototype.

    Science.gov (United States)

    Laville, Patricia; Volpi, Iride; Bosco, Simona; Virgili, Giorgio; Neri, Simone; Continanza, Davide; Bonari, Enrico

    2016-04-01

    Nitrous oxide (N2O) flux measurement from agricultural soil surfaces remains a major challenge for the scientific community. Evaluating integrated soil N2O fluxes is difficult because these emissions are lower than those of the other greenhouse gas sources (CO2, CH4). They are also sporadic, being highly dependent on a few environmental conditions acting as limiting factors. Within a LIFE project (IPNOA: LIFE11 ENV/IT/00032), a station prototype was developed to integrate N2O and CO2 emissions annually using an automatic chamber technique. The main challenge was to develop a device durable enough to measure CO2 and N2O fluxes continuously, with sufficient sensitivity to allow reliable assessment of soil GHG emissions with minimal technical field interventions. The IPNOA station prototype was developed by West System SRL and was operated for 2 years (2014-2015) in an experimental maize field in Tuscany. The prototype comprised six automatic chambers; the complete measurement cycle lasted 2 hours. Each chamber closed for 20 min, and biogas accumulation was monitored in line with IR spectrometers. Auxiliary measurements, including soil temperature and water content as well as weather data, were also monitored. All data were managed remotely with the same acquisition software installed in the prototype control unit. Operating the prototype over the two cropping years allowed testing of its major features: its ability to evaluate the temporal variation of soil N2O fluxes over a long period under varying weather conditions and agricultural management, and to demonstrate the value of continuous flux measurements. The temporal distribution of N2O fluxes indicated that emissions can be very large and discontinuous over short periods of less than ten days, and that during about 70% of the time N2O fluxes were around the detection limit of the instrumentation, evaluated at 2 ng N ha-1 day-1.
N2O emission factor assessments were 1.9% in 2014
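    The closed-chamber technique described in this record derives a flux from the slope of gas concentration during chamber closure. A minimal sketch follows, with illustrative units and chamber geometry; it is not the IPNOA prototype's actual firmware or calibration:

```python
import numpy as np

def chamber_flux(times_s, conc_ppb, chamber_height_m,
                 temp_k=293.15, pressure_pa=101325.0):
    """Soil gas flux from the linear rise of concentration in a closed
    chamber: flux = (dC/dt) * (P / (R*T)) * h, where h = volume/area.
    Input in ppb (nmol gas per mol air); output in nmol m-2 s-1."""
    slope_ppb_s = np.polyfit(times_s, conc_ppb, 1)[0]  # linear-fit slope
    R = 8.314  # J mol-1 K-1
    air_molar_density = pressure_pa / (R * temp_k)  # mol air per m3
    return slope_ppb_s * air_molar_density * chamber_height_m

# synthetic 20-min closure: N2O rising linearly at 0.01 ppb/s from 330 ppb
t = np.arange(0, 1200, 60.0)
c = 330.0 + 0.01 * t
flux = chamber_flux(t, c, chamber_height_m=0.2)
```

    In practice, slopes near zero (the ~70% of readings around the detection limit reported above) are the hard case, which is why the prototype's sensitivity mattered.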

  12. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus...... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters was also...... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  13. Biological impact of music and software-based auditory training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based training can improve these biological signals. These findings of biological plasticity, in a variety of subject populations, relate to attention and auditory memory, and represent an integrated auditory system influenced by both sensation and cognition. Learning outcomes: The reader will (1) understand that the auditory system is malleable to experience and training, (2) learn the ingredients necessary for auditory learning to successfully be applied to communication, (3) learn that the auditory brainstem response to complex sounds (cABR) is a window into the integrated auditory system, and (4) see examples of how cABR can be used to track the outcome of experience and training. PMID:22789822

  14. Auditory aura in frontal opercular epilepsy: sounds from afar.

    Science.gov (United States)

    Thompson, Stephen A; Alexopoulos, Andreas; Bingaman, William; Gonzalez-Martinez, Jorge; Bulacio, Juan; Nair, Dileep; So, Norman K

    2015-06-01

    Auditory auras are typically considered to localize to the temporal neocortex. Herein, we present two cases of frontal operculum/perisylvian epilepsy with auditory auras. Following a non-invasive evaluation, including ictal SPECT and magnetoencephalography, implicating the frontal operculum, these cases were evaluated with invasive monitoring, using stereoelectroencephalography and subdural (plus depth) electrodes, respectively. Spontaneous and electrically-induced seizures showed an ictal onset involving the frontal operculum in both cases. A typical auditory aura was triggered by stimulation of the frontal operculum in one. Resection of the frontal operculum and subjacent insula rendered one case seizure- (and aura-) free. From a hodological (network) perspective, we discuss these findings with consideration of the perisylvian and insular network(s) interconnecting the frontal and temporal lobes, and revisit the non-invasive data, specifically that of ictal SPECT.

  15. Synchronization with audio and visual stimuli: Exploring multisensory integration and the role of spatio-temporal information

    OpenAIRE

    Armstrong, Alan

    2014-01-01

    Information lies at the very heart of the interaction between an individual and their environment, which has led many researchers to argue that the coupling constraining rhythmic coordination is informational. In an attempt to address this informational basis for perception-action this thesis explored the specific information from a given environmental stimulus that is used to control our actions. Namely, participants in the three studies synchronized wrist-pendulum movements with auditory an...

  16. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  17. The Brain As a Mixer, I. Preliminary Literature Review: Auditory Integration. Studies in Language and Language Behavior, Progress Report Number VII.

    Science.gov (United States)

    Semmel, Melvyn I.; And Others

    Methods to evaluate central hearing deficiencies and to localize brain damage are reviewed beginning with Bocca who showed that patients with temporal lobe tumors made significantly lower discrimination scores in the ear opposite the tumor when speech signals were distorted. Tests were devised to attempt to pinpoint brain damage on the basis of…

  18. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention may play a modulatory role in audiovisual integration.

  19. 2 years of integral monitoring of GRS 1915+105. II. X-ray spectro-temporal analysis

    DEFF Research Database (Denmark)

    Rodriguez, J.; Shaw, S.E.; Hannikainen, D.C.;

    2008-01-01

    This is the second paper presenting the results of 2 yr of monitoring of GRS 1915+105 with INTEGRAL, RXTE, and the Ryle Telescope. We present the X-ray spectral and temporal analysis of four observations showing strong radio to X-ray correlations. During one observation GRS 1915+105 was in a steady state, while during the three others it showed cycles of X-ray dips and spikes (followed by radio flares). Through time-resolved spectroscopy of these cycles, we suggest that the soft X-ray spike is the trigger of the ejection. The ejected medium is then the coronal material responsible for the hard X-ray emission. In the steady state observation, the X-ray spectrum is indicative of the hard-intermediate state, with the presence of a relatively strong emission at 15 GHz. The X-ray spectrum is the sum of a Comptonized component and an extra power law extending to energies > 200 keV without any evidence for a...
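    The two-component spectral decomposition described in this record can be sketched as a toy model: a Comptonized component, crudely approximated here by a cutoff power law, plus an extra power law that persists beyond 200 keV. All function names and parameter values are illustrative, not fitted values from the paper:

```python
import numpy as np

def model_spectrum(energy_kev, norm_c, kt_e, norm_pl, gamma):
    """Toy two-component photon spectrum: thermal Comptonization
    approximated by a cutoff power law (rollover near kt_e), plus an
    extra power law that dominates at the highest energies."""
    comptonized = norm_c * energy_kev ** -2.0 * np.exp(-energy_kev / kt_e)
    extra_power_law = norm_pl * energy_kev ** -gamma
    return comptonized + extra_power_law

e = np.logspace(0.0, np.log10(300.0), 200)  # 1-300 keV grid
flux = model_spectrum(e, norm_c=1.0, kt_e=30.0, norm_pl=0.01, gamma=2.5)
# the exponential cutoff removes the Comptonized term at high energy,
# so any flux detected above ~200 keV must come from the extra power law
```

    This is why detecting emission above 200 keV without a cutoff points to a second, non-thermal component.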

  20. Two Years of INTEGRAL monitoring of GRS 1915+105 Part 2: X-Ray Spectro-Temporal Analysis

    CERN Document Server

    Rodríguez, J; Hannikainen, D C; Belloni, T; Corbel, S; Bel, M Cadolle; Chenevez, J; Prat, L; Kretschmar, P; Lehto, H J; Mirabel, I F; Paizis, A; Pooley, G; Tagger, M; Varniere, P; Cabanac, C; Vilhu, O

    2007-01-01

    (abridged) This is the second paper presenting the results of two years of monitoring of GRS 1915+105 with INTEGRAL, RXTE and the Ryle Telescope. We present the X-ray spectral and temporal analysis of four observations which showed strong radio to X-ray correlations. During one observation GRS 1915+105 was in a steady state, while during the three others it showed cycles of X-ray dips and spikes (followed by radio flares). We present the time-resolved spectroscopy of these cycles and show that in all cases the hard X-ray component (the Comptonized emission from a coronal medium) is suppressed in coincidence with a soft X-ray spike that ends the cycle. We interpret these results as evidence that the soft X-ray spike is the trigger of the ejection, and that the ejected medium is the coronal material. In the steady state observation, the X-ray spectrum is indicative of the hard-intermediate state, with the presence of a relatively strong emission at 15 GHz. The X-ray spectra are the sum of a Comptonized comp...

  1. Multimodal Lexical Processing in Auditory Cortex Is Literacy Skill Dependent

    OpenAIRE

    McNorgan, Chris; Awati, Neha; Desroches, Amy S.; Booth, James R.

    2013-01-01

    Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others d...

  2. Preliminary discussion of the relationship between the integrated visual and auditory continuous performance test and intelligence quotient

    Institute of Scientific and Technical Information of China (English)

    冯冰; 杨良政; 周海鹰; 陈红

    2011-01-01

    Objective: To explore the relationship between the quotients of the integrated visual and auditory continuous performance test (IVA-CPT) and intelligence quotient, and to find out the value of IVA-CPT for assisted diagnosis. Methods: A linear correlation analysis was conducted on the quotients of IVA-CPT and intelligence quotient. Results: There was a positive correlation between intelligence quotient and the comprehensive attention quotient, response control quotient, understanding quotient and visual sensory integration quotient; the intelligence quotients of children with attention deficit hyperactivity disorder (ADHD) of the attention deficit type were relatively low. Conclusion: The quotients of IVA-CPT are closely related to intelligence quotient, and the impact of the attention deficit type of ADHD on intelligence quotient is large.

  3. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Background: Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants took part in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results: During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion: These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. 
More importantly, we are able

  4. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  5. Improving depiction of temporal bone anatomy with low-radiation dose CT by an integrated circuit detector in pediatric patients: a preliminary study.

    Science.gov (United States)

    He, Jingzhen; Zu, Yuliang; Wang, Qing; Ma, Xiangxing

    2014-12-01

    The purpose of this study was to determine the performance of low-dose computed tomography (CT) scanning with an integrated circuit (IC) detector in defining fine structures of the temporal bone in children, by comparison with a conventional detector. The study was performed with the approval of our institutional review board and the patients' anonymity was maintained. A total of 86 children [...] 0.05). The low-dose CT images acquired with the IC detector provide better depiction of fine osseous structures of the temporal bone than those with the conventional DC detector.

  6. Analysis of the treatment effect of auditory integrative training on 20 autistic children

    Institute of Scientific and Technical Information of China (English)

    张朝; 于宗富; 黄晓玲; 王玲; 方俊明

    2011-01-01

    Objective: To explore the treatment effect of auditory integrative training (AIT) on autistic children and provide clinical support for their rehabilitative treatment. Methods: 20 autistic children aged 2-4 years were treated with AIT. The patients were assessed with the Autism Behavior Checklist (ABC), the Wechsler Preschool and Primary Scale of Intelligence (WPPSI), the Gesell Development Scale and an AIT effect questionnaire; the effect was evaluated through changes in clinical manifestations and in ABC and IQ scores. Results: Six months later, ABC scores had dropped significantly (P < 0.05) while IQ scores had not changed significantly (P > 0.05). The group improved rapidly in many respects, including language disorders, social interaction, emotional disorders, sleep, attention and motor skills, but showed little change in abnormal behaviors such as eye contact, stereotyped behavior and self-care. Conclusion: The short-term effect of auditory integrative training on the clinical symptoms of 2- to 4-year-old autistic children is positive, and its benefits are broad. Medical and health-care departments at all levels, parents and kindergartens should improve their awareness of childhood autism so that affected children are identified and treated early.

  7. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a finely graded, sub-centimetre cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. PMID:26332785

  9. Auditory rhythmic cueing in movement rehabilitation: findings and possible mechanisms

    Science.gov (United States)

    Schaefer, Rebecca S.

    2014-01-01

    Moving to music is intuitive and spontaneous, and music is widely used to support movement, most commonly during exercise. Auditory cues are increasingly also used in the rehabilitation of disordered movement, by aligning actions to sounds such as a metronome or music. Here, the effect of rhythmic auditory cueing on movement is discussed and representative findings of cued movement rehabilitation are considered for several movement disorders, specifically post-stroke motor impairment, Parkinson's disease and Huntington's disease. There are multiple explanations for the efficacy of cued movement practice. Potentially relevant, non-mutually exclusive mechanisms include the acceleration of learning; qualitatively different motor learning owing to an auditory context; effects of increased temporal skills through rhythmic practices and motivational aspects of musical rhythm. Further considerations of rehabilitation paradigm efficacy focus on specific movement disorders, intervention methods and complexity of the auditory cues. Although clinical interventions using rhythmic auditory cueing do not show consistently positive results, it is argued that internal mechanisms of temporal prediction and tracking are crucial, and further research may inform rehabilitation practice to increase intervention efficacy. PMID:25385780

  10. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    Science.gov (United States)

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mt11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  11. Single case study on a child with autism treated by auditory integration training

    Institute of Scientific and Technical Information of China (English)

    张朝; 方俊明

    2012-01-01

    Objective: To explore the short-term therapeutic effect of auditory integration training (AIT) on a child with autism and provide a clinical basis for the rehabilitation therapy of children with autism. Methods: A single-subject A-B-A design with four single baselines was used; one child aged three years and seven months was treated with AIT. Results: The child's instances of spontaneous language and spontaneous social contact increased markedly, while the duration of crying and screaming and the frequency of stereotyped behavior decreased sharply; AIT also showed a delayed effect. Judging by the trend of the regression lines, the treatment period and the second baseline period showed considerable change relative to the first baseline period, with a further improving trend in the direction of development. Conclusion: AIT has a short-term therapeutic effect for this autistic child.

  12. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole. PMID:20881120

  13. Early visual deprivation severely compromises the auditory sense of space in congenitally blind children.

    Science.gov (United States)

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-06-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in children, and that this capacity recovers over time. (PsycINFO Database Record) PMID:27228448

  15. Integrated community profiling indicates long-term temporal stability of the predominant faecal microbiota in captive cheetahs.

    Directory of Open Access Journals (Sweden)

    Anne A M J Becker

    Understanding the symbiotic relationship between gut microbes and their animal host requires characterization of the core microbiota across populations and in time. Especially in captive populations of endangered wildlife species such as the cheetah (Acinonyx jubatus), this knowledge is a key element to enhance feeding strategies and reduce gastrointestinal disorders. In order to investigate the temporal stability of the intestinal microbiota in cheetahs under human care, we conducted a longitudinal study over a 3-year period with bimonthly faecal sampling of 5 cheetahs housed in two European zoos. For this purpose, an integrated 16S rRNA DGGE-clone library approach was used in combination with a series of real-time PCR assays. Our findings disclosed a stable faecal microbiota, beyond intestinal community variations that were detected between zoo sample sets or between animals. The core of this microbiota was dominated by members of Clostridium clusters I, XI and XIVa, with mean concentrations ranging from 7.5-9.2 log10 CFU/g faeces and with significant positive correlations between these clusters (P<0.05), and by Lactobacillaceae. Moving window analysis of DGGE profiles revealed 23.3-25.6% change between consecutive samples for four of the cheetahs. The fifth animal in the study suffered from intermittent episodes of vomiting and diarrhea during the monitoring period and exhibited remarkably more change (39.4%). This observation may reflect the temporary impact of perturbations such as the animal's compromised health, antibiotic administration or a combination thereof, which temporarily altered the relative proportions of Clostridium clusters I and XIVa. 
In conclusion, this first long-term monitoring study of the faecal microbiota in feline strict carnivores not only reveals a remarkable compositional stability of this ecosystem, but also shows a qualitative and quantitative similarity in a defined set of faecal bacterial lineages across the five
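    The moving window analysis mentioned in this record is commonly computed as 100 minus the Pearson similarity (%) between the band-intensity profiles of consecutive DGGE lanes. The sketch below follows that common practice under stated assumptions: the profile vectors and the per-lane summarization are invented for illustration, not the study's data.

```python
import math

def pearson_similarity_pct(p, q):
    """Pearson correlation between two band-intensity profiles, as a percentage."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = math.sqrt(sum((a - mp) ** 2 for a in p))
    sq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return 100.0 * cov / (sp * sq)

def rate_of_change(profiles):
    """Mean % change between consecutive sample profiles (moving window analysis)."""
    changes = [100.0 - pearson_similarity_pct(a, b)
               for a, b in zip(profiles, profiles[1:])]
    return sum(changes) / len(changes)

# Invented bimonthly band-intensity vectors for one animal.
profiles = [
    [0.9, 0.1, 0.5, 0.3, 0.7],
    [0.8, 0.2, 0.6, 0.3, 0.6],
    [0.7, 0.2, 0.5, 0.4, 0.7],
]
change_pct = rate_of_change(profiles)  # small value = stable community
```

    Under this measure, the ~23-26% change reported for four cheetahs versus 39.4% for the fifth would correspond to a higher mean dissimilarity between that animal's consecutive lanes.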

  16. Areas of cat auditory cortex as defined by neurofilament proteins expressing SMI-32.

    Science.gov (United States)

    Mellott, Jeffrey G; Van der Gucht, Estel; Lee, Charles C; Carrasco, Andres; Winer, Jeffery A; Lomber, Stephen G

    2010-08-01

    The monoclonal antibody SMI-32 was used to characterize and distinguish individual areas of cat auditory cortex. SMI-32 labels non-phosphorylated epitopes on the high- and medium-molecular weight subunits of neurofilament proteins in cortical pyramidal cells and dendritic trees with the most robust immunoreactivity in layers III and V. Auditory areas with unique patterns of immunoreactivity included: primary auditory cortex (AI), second auditory cortex (AII), dorsal zone (DZ), posterior auditory field (PAF), ventral posterior auditory field (VPAF), ventral auditory field (VAF), temporal cortex (T), insular cortex (IN), anterior auditory field (AAF), and the auditory field of the anterior ectosylvian sulcus (fAES). Unique patterns of labeling intensity, soma shape, soma size, layers of immunoreactivity, laminar distribution of dendritic arbors, and labeled cell density were identified. Features that were consistent in all areas included: layers I and IV neurons are immunonegative; nearly all immunoreactive cells are pyramidal; and immunoreactive neurons are always present in layer V. To quantify the results, the numbers of labeled cells and dendrites, as well as cell diameter, were collected and used as tools for identifying and differentiating areas. Quantification of the labeling patterns also established profiles for ten auditory areas/layers and their degree of immunoreactivity. Areal borders delineated by SMI-32 were highly correlated with tonotopically-defined areal boundaries. Overall, SMI-32 immunoreactivity can delineate ten areas of cat auditory cortex and demarcate topographic borders. The ability to distinguish auditory areas with SMI-32 is valuable for the identification of auditory cerebral areas in electrophysiological, anatomical, and/or behavioral investigations.

  17. Long-term music training tunes how the brain temporally binds signals from multiple senses.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2011-12-20

    Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics-fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry. Critically, the premotor asynchrony effects predicted musicians' perceptual sensitivity to audiovisual asynchrony. Our results suggest that piano practicing fine tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds. This internal forward model furnishes more precise estimates of the relative audiovisual timings and hence, stronger prediction error signals specifically for asynchronous music in a premotor-cerebellar circuitry. Our findings show intimate links between action production and audiovisual temporal binding in perception. PMID:22114191

  18. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  19. Visual–auditory spatial processing in auditory cortical neurons

    OpenAIRE

    Bizley, Jennifer K.; King, Andrew J

    2008-01-01

    Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...

  20. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  1. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  2. Auditory processing efficiency deficits in children with developmental language impairments

    Science.gov (United States)

    Hartley, Douglas E. H.; Moore, David R.

    2002-12-01

    The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, both within the auditory and visual domains. This hypothesis has been supported through evidence that language-impaired individuals have excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. Results from this modeling suggest that the masking results can be better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window, but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in the simultaneous masking task, and the much larger deficits in backward and forward masking tasks amongst those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
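    The window-plus-efficiency machinery described in this record can be caricatured in a few lines: masker power is smeared through a temporal window, and threshold is the signal power at which the signal-to-masker ratio at the window output reaches an efficiency constant k. The window shape, time constant and k values below are illustrative assumptions, and the basilar-membrane compression the paper invokes to explain the between-task asymmetry is omitted here.

```python
import math

def window_weight(dt_ms, tau_ms=8.0):
    """Exponentially decaying temporal-window weight at lag dt_ms (illustrative shape)."""
    return math.exp(-abs(dt_ms) / tau_ms)

def effective_masker(masker_times_ms, probe_time_ms, masker_power=1.0):
    """Masker power passed by a temporal window centred on the probe."""
    return sum(masker_power * window_weight(t - probe_time_ms)
               for t in masker_times_ms)

def threshold_power(masker_times_ms, probe_time_ms, k):
    """Signal power needed so that signal / effective masker power equals k."""
    return k * effective_masker(masker_times_ms, probe_time_ms)

masker = range(0, 50)  # 50-ms masker sampled in 1-ms steps (illustrative)
simultaneous = threshold_power(masker, 25.0, k=1.0)        # probe inside the masker
backward = threshold_power(masker, -10.0, k=1.0)           # probe 10 ms before masker onset
impaired_backward = threshold_power(masker, -10.0, k=4.0)  # same window, poorer efficiency
```

    In this linear sketch a poorer-efficiency listener (larger k) simply needs k times more signal power everywhere; reproducing the disproportionately large backward-masking deficits additionally requires the compressive cochlear nonlinearity discussed in the abstract.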

  3. How modality specific is processing of auditory and visual rhythms?

    Science.gov (United States)

    Pasinski, Amanda C; McAuley, J Devin; Snyder, Joel S

    2016-02-01

    The present study used ERPs to test the extent to which temporal processing is modality specific or modality general. Participants were presented with auditory and visual temporal patterns that consisted of initial two- or three-event beginning patterns. This delineated a constant standard time interval, followed by a two-event ending pattern delineating a variable test interval. Participants judged whether they perceived the pattern as a whole to be speeding up or slowing down. The contingent negative variation (CNV), a negative potential reflecting temporal expectancy, showed a larger amplitude for the auditory modality compared to the visual modality but a high degree of similarity in scalp voltage patterns across modalities, suggesting that the CNV arises from modality-general processes. A late, memory-dependent positive component (P3) also showed similar patterns across modalities.

  4. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...... of sound as an active component in shaping urban environments. As urban conditions spread globally, new scales, shapes and forms of communities appear and call for new distinctions and models in the study and representation of sonic environments. Particularly so, since urban environments are increasingly...... presents some terminologies for mapping urban environments through their sonic configuration. Such probing into the practices of acoustic territorialisation may direct attention to some of the conflicting and disharmonious interests defining public inclusive domains. The paper investigates the concept...

  5. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

    Full Text Available People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences of performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
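Lead/lag results like those above are typically derived from the relative phase between movement and stimulus. A minimal numpy sketch, assuming narrow-band signals and using a quadrature (derivative-based) phase estimate rather than any particular method from the paper:

```python
import numpy as np

def inst_phase(x, fs, f):
    # Quadrature phase estimate for a narrow-band signal x ~ sin(theta(t)):
    # dx/dt ~ 2*pi*f*cos(theta), so theta = atan2(x, dx/dt / (2*pi*f)).
    dx = np.gradient(x) * fs
    return np.arctan2(x, dx / (2 * np.pi * f))

def relative_phase(movement, stimulus, fs, f):
    # Wrapped to (-pi, pi]; positive values mean the movement leads.
    d = inst_phase(movement, fs, f) - inst_phase(stimulus, fs, f)
    return np.angle(np.exp(1j * d))

fs, f = 200.0, 1.0                          # sample rate and oscillation frequency
t = np.arange(0, 20, 1 / fs)
stimulus = np.sin(2 * np.pi * f * t)
movement = np.sin(2 * np.pi * f * t + 0.3)  # synthetic movement leading by 0.3 rad
rp = relative_phase(movement, stimulus, fs, f)
mean_rp = np.angle(np.mean(np.exp(1j * rp)))
print(f"mean relative phase: {mean_rp:.2f} rad")
```

The circular mean at the end is the standard way to average wrapped phase values; here it recovers the 0.3 rad lead built into the synthetic movement.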

  6. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)
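With sparse sampling at TR = 8 s, a response delay like the reported callosal lag can be estimated in whole volumes by shifting the block regressor and maximizing correlation. A hypothetical sketch on synthetic time series (the shift-based estimator, noise level, and block layout are assumptions for illustration, not the study's analysis):

```python
import numpy as np

TR = 8.0                                          # sparse-sampling TR (s)
boxcar = np.tile([1.0] * 4 + [0.0] * 4, 4)        # four sound-on/off cycles

def response_lag(response, regressor, max_shift=3):
    # Choose the shift (in volumes) that maximises correlation with the regressor.
    corrs = [np.corrcoef(np.roll(regressor, s), response)[0, 1]
             for s in range(max_shift + 1)]
    return TR * int(np.argmax(corrs))

rng = np.random.default_rng(0)
gm = np.roll(boxcar, 1) + 0.05 * rng.standard_normal(boxcar.size)  # ~1 TR lag
wm = np.roll(boxcar, 2) + 0.05 * rng.standard_normal(boxcar.size)  # ~2 TR lag
print("GM lag:", response_lag(gm, boxcar), "s; WM lag:", response_lag(wm, boxcar), "s")
```

In this toy setup the white-matter series lags the grey-matter series by one TR (8 s), mirroring the delay the study reports for callosal WM.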

  7. Temporal Integration Effects in Facial Expression Recognition under Different Temporal Duration Conditions

    Institute of Scientific and Technical Information of China (English)

    陈本友; 黄希庭

    2012-01-01

    By segmenting each facial expression into three parts and presenting the parts successively at different inter-stimulus intervals and presentation durations, this study examined participants' temporal integration of facial expressions, in order to explore the processing and influencing factors of temporal integration. The results showed that: (1) the temporal integration of facial expressions is affected by the temporal structure and the stimulus materials; (2) whether separately presented facial-expression parts can be temporally integrated depends on the SOA; (3) the temporal integration of facial expressions differs across expression categories; (4) the temporal integration of facial expressions takes place within a limited visual buffer, with iconic memory and long-term memory closely involved in the process. Temporal integration is the process of perceptual processing in which successively presented, separated stimuli are combined into a meaningful representation. It is a complicated process, known to be influenced by multiple factors, such as the temporal structure and stimulus components. Although this process has been explored with inter-stimulus intervals in face perception, little is known about the temporal integration effect in facial expression recognition. More importantly, there has been no relevant evidence demonstrating that stimulus duration and stimulus category can affect the temporal integration of facial expression. In the present study, the part-whole judgment task was used to examine the influencing factors of temporal integration of facial expressions. In two experiments, each of three whole facial expression pictures was segmented into three parts, each including a salient facial feature: eye, nose, or mouth. These parts were presented sequentially to the participants at various intervals or presentation durations, in a fixed sequence: eye part first, nose part next, and mouth part last. Following the last part, a mask, which eliminated effects of afterimages or other types of visual persistence, was displayed. Then, participants were asked to judge the category of the facial expression by pressing one of three number keys, "1", "2" and "3", corresponding to anger, happy and

  8. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. PMID:27292114
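The dichotic construction described above, in which the higher and lower scale tones alternate between the two ears at each instant, can be sketched as follows (the sample rate, note duration, and C-major scale are illustrative choices, not the study's stimulus parameters):

```python
import numpy as np

FS = 16000                                        # sample rate (Hz)
SCALE = np.array([0, 2, 4, 5, 7, 9, 11, 12])      # C-major scale in semitones

def tone(semitone, dur=0.25, base=261.63):
    t = np.arange(int(FS * dur)) / FS
    return np.sin(2 * np.pi * base * 2 ** (semitone / 12) * t)

left, right = [], []
for i, (up, down) in enumerate(zip(SCALE, SCALE[::-1])):
    hi, lo = max(up, down), min(up, down)
    # The dichotic manipulation: the higher tone goes to alternating ears.
    l, r = (hi, lo) if i % 2 == 0 else (lo, hi)
    left.append(tone(l))
    right.append(tone(r))

stereo = np.stack([np.concatenate(left), np.concatenate(right)])
print(stereo.shape)
```

Each channel of `stereo` carries the zigzag pitch sequence the abstract describes, yet most listeners report smooth separate up-down and down-up streams.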

  9. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise. PMID:23664946
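The alpha-power decrease reported here is the kind of quantity obtained from a band-limited power estimate. A minimal sketch using a Hann-windowed periodogram on synthetic data (frequencies, band edges, and amplitudes are illustrative; the study used full time-frequency decompositions of MEG data):

```python
import numpy as np

def band_power(x, fs, lo=8.0, hi=12.0):
    """Mean power in the alpha band from a Hann-windowed periodogram."""
    w = np.hanning(x.size)
    spec = np.abs(np.fft.rfft(x * w)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].mean()

fs = 250.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
illusion = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
# A reduced 10 Hz amplitude during the illusion shows up as lower alpha power:
print(band_power(baseline, fs) > band_power(illusion, fs))
```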

  11. High average power Yb:CaF2 femtosecond amplifier with integrated simultaneous spatial and temporal focusing for laser material processing

    Science.gov (United States)

    Squier, J.; Thomas, J.; Block, E.; Durfee, C.; Backus, S.

    2014-01-01

    A watt-level, 10-kHz repetition rate chirped pulse amplification system with an integrated simultaneous spatial and temporal focusing (SSTF) processing system is demonstrated for the first time. SSTF significantly reduces nonlinear effects normally detrimental to beam control, enabling the use of a low numerical aperture focus to quickly treat optically transparent materials over a large area. The integrated SSTF system has improved efficiency compared to previously reported SSTF designs, which, combined with the high repetition rate of the laser, further optimizes its capability to provide rapid, large-volume processing.

  12. Effects of Presentation Rate and Attention on Auditory Discrimination: A Comparison of Long-Latency Auditory Evoked Potentials in School-Aged Children and Adults.

    Science.gov (United States)

    Choudhury, Naseem A; Parascando, Jessica A; Benasich, April A

    2015-01-01

    Decoding human speech requires both perception and integration of brief, successive auditory stimuli that enter the central nervous system as well as the allocation of attention to language-relevant signals. This study assesses the role of attention on processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs), specifically mismatch negativity (MMN) responses, to paired tones (standard 100-100 Hz; deviant 100-300 Hz) separated by a 300, 70 or 10 ms silent gap (ISI) were recorded under Ignore and Attend conditions in 21 adults and 23 children (6-11 years old). In adults, an attention-related enhancement was found for all rate conditions and laterality effects (L>R) were observed. In children, 2 auditory discrimination-related peaks were identified from the difference wave (deviant-standard): an early peak (eMMN) at about 100-300 ms indexing sensory processing, and a later peak (LDN), at about 400-600 ms, thought to reflect reorientation to the deviant stimuli or "second-look" processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: The eMMN had a more frontal topography as compared to adults and attention played a significantly greater role in children's rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms related to further processing of complex stimuli. The differences between eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a more complete understanding of children's responses to rapid successive auditory stimulation requires an examination of both peaks. PMID:26368126
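The eMMN and LDN measures above come from a deviant-minus-standard difference wave. A schematic numpy version with synthetic epochs (the search window, sampling rate, and Gaussian "MMN" are assumptions for illustration only):

```python
import numpy as np

def mmn_peak(standard, deviant, times, win=(0.1, 0.3)):
    """Deviant-minus-standard difference wave and its most negative point
    within a search window (100-300 ms here, as for the eMMN)."""
    diff = deviant.mean(axis=0) - standard.mean(axis=0)
    mask = (times >= win[0]) & (times <= win[1])
    i = np.argmin(diff[mask])
    return diff, times[mask][i], diff[mask][i]

fs = 500.0
times = np.arange(-0.1, 0.7, 1 / fs)
rng = np.random.default_rng(1)
erp = -2e-6 * np.exp(-(((times - 0.18) / 0.05) ** 2))  # synthetic MMN at 180 ms
standard = 1e-6 * rng.standard_normal((60, times.size))
deviant = erp + 1e-6 * rng.standard_normal((60, times.size))
diff, t_peak, amp = mmn_peak(standard, deviant, times)
print(f"MMN peak {amp * 1e6:.1f} uV at {t_peak * 1000:.0f} ms")
```

Averaging across 60 trials suppresses the trial noise enough that the built-in 180 ms negativity dominates the windowed minimum.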

  13. Effects of Presentation Rate and Attention on Auditory Discrimination: A Comparison of Long-Latency Auditory Evoked Potentials in School-Aged Children and Adults.

    Directory of Open Access Journals (Sweden)

    Naseem A Choudhury

    Full Text Available Decoding human speech requires both perception and integration of brief, successive auditory stimuli that enter the central nervous system as well as the allocation of attention to language-relevant signals. This study assesses the role of attention on processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs, specifically mismatch negativity (MMN responses, to paired tones (standard 100-100 Hz; deviant 100-300 Hz separated by a 300, 70 or 10 ms silent gap (ISI were recorded under Ignore and Attend conditions in 21 adults and 23 children (6-11 years old. In adults, an attention-related enhancement was found for all rate conditions and laterality effects (L>R were observed. In children, 2 auditory discrimination-related peaks were identified from the difference wave (deviant-standard: an early peak (eMMN at about 100-300 ms indexing sensory processing, and a later peak (LDN, at about 400-600 ms, thought to reflect reorientation to the deviant stimuli or "second-look" processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: The eMMN had a more frontal topography as compared to adults and attention played a significantly greater role in childrens' rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms related to further processing of complex stimuli. The differences between eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a more complete understanding of children's responses to rapid successive auditory stimulation requires an examination of both peaks.

  14. Glutamate-bound NMDARs arising from in vivo-like network activity extend spatio-temporal integration in a L5 cortical pyramidal cell model.

    Directory of Open Access Journals (Sweden)

    Matteo Farinella

    2014-04-01

    Full Text Available In vivo, cortical pyramidal cells are bombarded by asynchronous synaptic input arising from ongoing network activity. However, little is known about how such 'background' synaptic input interacts with nonlinear dendritic mechanisms. We have modified an existing model of a layer 5 (L5 pyramidal cell to explore how dendritic integration in the apical dendritic tuft could be altered by the levels of network activity observed in vivo. Here we show that asynchronous background excitatory input increases neuronal gain and extends both temporal and spatial integration of stimulus-evoked synaptic input onto the dendritic tuft. Addition of fast and slow inhibitory synaptic conductances, with properties similar to those from dendritic targeting interneurons, that provided a 'balanced' background configuration, partially counteracted these effects, suggesting that inhibition can tune spatio-temporal integration in the tuft. Excitatory background input lowered the threshold for NMDA receptor-mediated dendritic spikes, extended their duration and increased the probability of additional regenerative events occurring in neighbouring branches. These effects were also observed in a passive model where all the non-synaptic voltage-gated conductances were removed. Our results show that glutamate-bound NMDA receptors arising from ongoing network activity can provide a powerful spatially distributed nonlinear dendritic conductance. This may enable L5 pyramidal cells to change their integrative properties as a function of local network activity, potentially allowing both clustered and spatially distributed synaptic inputs to be integrated over extended timescales.
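The voltage dependence that lets background depolarization lower the NMDA-spike threshold comes from the magnesium block of the NMDAR. A sketch of the widely used Jahr and Stevens (1990) block term (the paper's model is far more detailed; g_max and [Mg2+] = 1 mM here are nominal values):

```python
import numpy as np

def nmda_conductance(v_mv, g_max=1.0, mg_mm=1.0):
    """Relative NMDAR conductance with the Jahr & Stevens (1990) Mg2+ block."""
    block = 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))
    return g_max * block

# Depolarisation (e.g. by background excitation) relieves the block, which is
# what lowers the threshold for NMDA-mediated dendritic spikes:
for v in (-70.0, -40.0, 0.0):
    print(f"{v:6.1f} mV -> g = {nmda_conductance(v):.3f}")
```

Near rest the receptor passes only a few percent of its maximal conductance, so glutamate-bound NMDARs act as a latent, voltage-gated conductance that ongoing depolarizing input can recruit.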

  15. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.

  16. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    Directory of Open Access Journals (Sweden)

    Shenton Martha E

    2009-07-01

    Full Text Available Abstract Background Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the LH sources. Conclusion These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by
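The phase locking factor (PLF) used in this study is the magnitude of the across-trial mean of unit-length phase vectors at the stimulation frequency. A minimal numpy sketch at 40 Hz (a single-frequency projection stands in for the full time-frequency decomposition, which is a simplification):

```python
import numpy as np

def phase_locking_factor(epochs, fs, f=40.0):
    """PLF at frequency f: |mean of unit phase vectors across trials|.
    1 = perfectly phase-locked, ~0 = random phase from trial to trial."""
    t = np.arange(epochs.shape[1]) / fs
    coeffs = (epochs * np.exp(-2j * np.pi * f * t)).sum(axis=1)
    return np.abs(np.mean(coeffs / np.abs(coeffs)))

fs, n_trials = 1000.0, 50
t = np.arange(int(0.2 * fs)) / fs                 # 200 ms epochs
rng = np.random.default_rng(0)
locked = np.array([np.sin(2 * np.pi * 40 * t) + rng.standard_normal(t.size)
                   for _ in range(n_trials)])
jittered = np.array([np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
                     + rng.standard_normal(t.size) for _ in range(n_trials)])
plf_locked = phase_locking_factor(locked, fs)
plf_jittered = phase_locking_factor(jittered, fs)
print(f"locked: {plf_locked:.2f}, jittered: {plf_jittered:.2f}")
```

Because the measure discards amplitude, it isolates trial-to-trial phase consistency, which is why PLF and evoked power can dissociate as reported above.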

  17. Selective attention in an insect auditory neuron.

    Science.gov (United States)

    Pollack, G S

    1988-07-01

    Previous work (Pollack, 1986) showed that an identified auditory neuron of crickets, the omega neuron, selectively encodes the temporal structure of an ipsilateral sound stimulus when a contralateral stimulus is presented simultaneously, even though the contralateral stimulus is clearly encoded when it is presented alone. The present paper investigates the physiological basis for this selective response. The selectivity for the ipsilateral stimulus is a result of the apparent intensity difference of ipsi- and contralateral stimuli, which is imposed by auditory directionality; when simultaneous presentation of stimuli from the 2 sides is mimicked by presenting low- and high-intensity stimuli simultaneously from the ipsilateral side, the neuron responds selectively to the high-intensity stimulus, even though the low-intensity stimulus is effective when it is presented alone. The selective encoding of the more intense (= ipsilateral) stimulus is due to intensity-dependent inhibition, which is superimposed on the cell's excitatory response to sound. Because of the inhibition, the stimulus with lower intensity (i.e., the contralateral stimulus) is rendered subthreshold, while the stimulus with higher intensity (the ipsilateral stimulus) remains above threshold. Consequently, the temporal structure of the low-intensity stimulus is filtered out of the neuron's spike train. The source of the inhibition is not known. It is not a consequence of activation of the omega neuron. Its characteristics are not consistent with those of known inhibitory inputs to the omega neuron.
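The proposed mechanism, excitation combined with inhibition that grows with the loudest stimulus present, can be caricatured in a few lines (the threshold and inhibition gain are invented constants; this is not a fitted model of the omega neuron):

```python
def encoded(level_db, other_db=None, k=0.6, threshold_db=20.0):
    """True if a stimulus stays above threshold. Inhibition scales with the
    loudest stimulus present; k and threshold_db are purely illustrative."""
    loudest = max(level_db, other_db) if other_db is not None else level_db
    return level_db - k * loudest > threshold_db

print(encoded(60))       # weaker stimulus alone: encoded
print(encoded(60, 80))   # paired with a louder ipsilateral stimulus: suppressed
print(encoded(80, 60))   # the louder (ipsilateral) stimulus: still encoded
```

The toy model reproduces the qualitative result: the weaker stimulus is suprathreshold in isolation but is pushed below threshold when a more intense stimulus drives the shared inhibition.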

  18. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  19. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  20. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    Full Text Available The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysical studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the adhesion of the participant to the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performances under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performances observed in the present study are in line with those reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of a spatially accurate auditory and visual rendering for VR setups.

  1. Effect of auditory integration training on neuropsychological development in children with expressive language disorder

    Institute of Scientific and Technical Information of China (English)

    钱沁芳; 欧萍; 王章琼; 余秋娟; 谢燕钦; 杨式薇; 黄艳; 卢国斌; 杨闽燕

    2016-01-01

    Objective: To explore the short-term efficacy of auditory integration training (AIT) in children with expressive language disorder. Methods: 168 children diagnosed with expressive language disorder were randomly divided into treatment group 1 (Group A), treatment group 2 (Group B), a control group (Group C) and a blank control group (Group D), with 42 cases in each group. Auditory-sensitive frequencies were determined by ASSR testing with a brainstem evoked potential instrument, and AIT was delivered with a digital auditory integration training device. The Neuropsychological Development Scale for Children Aged 0-6 and an AIT efficacy questionnaire were used; overall efficacy was assessed from the differences in developmental quotient (DQ) scores for adaptive ability, language and social behavior over the 3 months before and after treatment, and from the changes in clinical symptoms in Group A at 1 and 3 months after training. Results: After 3 months of treatment, the language DQ and social-behavior DQ of Groups A and B were significantly higher than before treatment (t = -12.104 to -2.790, all P < 0.01), the DQ gains in the three domains in Group C were not significant (t = -1.655 to 1.193, all P > 0.05), and the DQs of the three domains in Group D were significantly lower than before (t = 2.509 to 3.371, all P < 0.05). Three months after treatment, the differences in language and social-behavior DQ among the four groups were statistically significant (F = 16.192 to 35.544, all P < 0.01). Group B had the highest language and social-behavior DQ scores (82.90±10.39 and 86.51±7.47), followed by Group A (73.75±15.45 and 83.91±9.20). The differences in adaptive-ability DQ between Groups A and B and the blank control group were statistically significant (P < 0.05), as were the differences in language and social-behavior DQ between Group A and the control group (P < 0.05). After 1 month of training, the mean overall response rates of Group A for language, social interaction and emotion were 78.57%, 65.24% and 19.05%; after 3 months of training they were 86.31%, 73.81% and 43.65%. Conclusion: AIT can raise the developmental level of language and social behavior in children with expressive language disorder, and has some efficacy in improving their language expression, social interaction and emotion.

  2. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
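The peak-picking step can be sketched on an acoustic (Fourier) spectrogram. This toy version keeps a fixed number of the largest time-frequency peaks per second (the FFT size, hop, and global rather than per-frame peak selection are assumptions of this sketch, not the paper's exact algorithm):

```python
import numpy as np

def sketch(signal, fs, n_fft=512, hop=256, peaks_per_sec=10):
    """Sparse spectrogram keeping only the largest time-frequency peaks."""
    n_frames = 1 + (signal.size - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * np.hanning(n_fft)
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    n_keep = max(1, int(peaks_per_sec * signal.size / fs))
    flat = mag.ravel()
    sparse = np.zeros_like(flat)
    keep = np.argsort(flat)[-n_keep:]     # indices of the largest peaks
    sparse[keep] = flat[keep]
    return sparse.reshape(mag.shape)

fs = 8000
t = np.arange(fs) / fs                    # 1 s test signal with two partials
sig = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
sp = sketch(sig, fs)
print((sp > 0).sum(), "peaks kept for", sig.size / fs, "s of audio")
```

At the sparsest level studied (10 features per second), everything outside those peaks is discarded before resynthesis, which is why category cues that survive this pruning differ across sound classes.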

  3. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
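
    The peak-picking idea behind these sketches can be illustrated in a few lines. The sketch below is an assumption-laden simplification, not the authors' implementation: it computes a plain Fourier magnitude spectrogram with numpy and keeps only the largest time-frequency peaks, at a budget of peaks per second matching the sparsest condition reported.

    ```python
    import numpy as np

    def sketch_spectrogram(signal, sr, n_fft=512, hop=256, peaks_per_sec=10):
        """Crude 'acoustic sketch': keep only the largest time-frequency
        peaks of an STFT magnitude spectrogram, zeroing everything else.
        Illustrative only; parameters (n_fft, hop) are assumptions."""
        window = np.hanning(n_fft)
        frames = []
        # Frame the signal and compute an STFT magnitude spectrogram.
        for start in range(0, len(signal) - n_fft + 1, hop):
            frames.append(np.abs(np.fft.rfft(signal[start:start + n_fft] * window)))
        spec = np.array(frames).T  # (freq bins, time frames)

        # Number of peaks to keep, proportional to signal duration.
        n_keep = max(1, int(round(peaks_per_sec * len(signal) / sr)))

        # Keep the n_keep largest magnitude cells; zero the rest.
        threshold = np.sort(spec.ravel())[-n_keep]
        sketch = np.where(spec >= threshold, spec, 0.0)
        return spec, sketch

    # Demo: a 1 s, 440 Hz tone sampled at 8 kHz.
    sr = 8000
    t = np.arange(sr) / sr
    spec, sk = sketch_spectrogram(np.sin(2 * np.pi * 440 * t), sr)
    ```

    Resynthesizing audio from the surviving peaks would then yield the acoustic sketch; the auditory-sketch variant would require a peripheral filterbank model in place of the STFT.
    
    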

  4. CT findings of the osteoma of the external auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ha Young; Song, Chang Joon; Yoon, Chung Dae; Park, Mi Hyun; Shin, Byung Seok [Chungnam National University, School of Medicine, Daejeon (Korea, Republic of)

    2006-07-15

    We wanted to report the CT imaging findings of osteoma of the external auditory canal. Temporal bone CT scanning was performed on eight patients (4 males and 4 females, aged between 8 and 41 years) with pathologically proven osteoma of the external auditory canal after operation, and the findings of the CT scanning were retrospectively reviewed. Not only did we analyze the size, shape, distribution and location of the osteomas, but we also analyzed the relationship between the lesion and the tympanosquamous or tympanomastoid suture line, and the changes seen on the CT scan images for the patients who were able to undergo follow-up. All the lesions were unilateral, solitary, pedunculated bony masses. In five patients, the osteomas occurred on the left side and in the other three patients on the right side. The average size of the osteomas was 0.6 cm, with the smallest being 0.5 cm and the largest being 1.2 cm. Each of the lesions was located at the osteochondral junction in the terminal part of the osseous external ear canal. The stalk of the osteoma was found to have arisen from the anteroinferior wall in five cases (63%), from the anterosuperior wall (the tympanosquamous suture line) in two cases (25%), and from the anterior wall in one case. The osteoma was a compact form in five cases and a cancellous form in three cases. One case of the cancellous form changed into a compact form 35 months later due to advanced ossification. Osteoma of the external auditory canal developed in a unilateral and solitary fashion. The characteristic imaging finding is that it is attached to the external auditory canal by its stalk. Contrary to common knowledge about its site of occurrence, the osteomas mostly arose in the tympanic wall, regardless of the tympanosquamous or tympanomastoid suture lines.

  5. Temporal coding in rhythm tasks revealed by modality effects.

    Science.gov (United States)

    Glenberg, A M; Jona, M

    1991-09-01

    Temporal coding has been studied by examining the perception and reproduction of rhythms and by examining memory for the order of events in a list. We attempt to link these research programs both empirically and theoretically. Glenberg and Swanson (1986) proposed that the superior recall of auditory material, compared with visual material, reflects more accurate temporal coding for the auditory material. In this paper, we demonstrate that a similar modality effect can be produced in a rhythm task. Auditory rhythms composed of stimuli of two durations are reproduced more accurately than are visual rhythms. Furthermore, it appears that the auditory superiority reflects enhanced chunking of the auditory material rather than better identification of durations. PMID:1956312

  6. Rhythm implicitly affects temporal orienting of attention across modalities.

    Science.gov (United States)

    Bolger, Deirdre; Trost, Wiebke; Schön, Daniele

    2013-02-01

    Here we present two experiments investigating the implicit orienting of attention over time by entrainment to an auditory rhythmic stimulus. In the first experiment, participants carried out detection and discrimination tasks with auditory and visual targets while listening to an isochronous auditory sequence, which acted as the entraining stimulus. For the second experiment, we used musical extracts as the entraining stimulus and tested the resulting strength of entrainment with a visual discrimination task. Both experiments used reaction times as the dependent variable. By manipulating the appearance of targets across four selected metrical positions of the auditory entraining stimulus, we were able to observe how entraining to a rhythm modulates behavioural responses. That our results were independent of modality gives new insight into cross-modal interactions between the auditory and visual modalities in the context of dynamic attending to auditory temporal structure.

  7. Neuromagnetic evidence for early auditory restoration of fundamental pitch.

    Directory of Open Access Journals (Sweden)

    Philip J Monahan

    Full Text Available BACKGROUND: Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset. METHODOLOGY: Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as "virtual pitch") changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone complex stimuli diverged only on the value of the missing fundamental component. PRINCIPAL FINDINGS: We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies. CONCLUSIONS: Our findings suggest that listeners reconstruct the inferred pitch by roughly 100 ms after stimulus onset and are consistent with previous electrophysiological research suggesting that the inferred pitch is perceived in early auditory cortex.
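
    For a harmonic complex of the kind described, the restored fundamental is simply the largest frequency that divides every component. A quick sanity check of the stimulus design (a textbook simplification, not the authors' analysis) is:

    ```python
    from functools import reduce
    from math import gcd

    def virtual_pitch(freqs_hz):
        """Missing fundamental of a harmonic complex, taken as the GCD
        of its integer component frequencies (textbook simplification)."""
        return reduce(gcd, freqs_hz)

    # Outer tones fixed at 1200 and 2400 Hz; inner tones varied so the
    # inferred fundamental changes even though the edges stay constant.
    assert virtual_pitch([1200, 1300, 2300, 2400]) == 100
    assert virtual_pitch([1200, 1400, 2200, 2400]) == 200
    ```

    The outer tones alone are consistent with a 1200 Hz fundamental; it is the inner tones that pin the percept down to the lower virtual pitch.
    
    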

  8. Automatically detecting auditory P300 in several trials

    Institute of Scientific and Technical Information of China (English)

    莫少锋; 汤井田; 陈洪波

    2015-01-01

    A method based on Infomax independent component analysis (Infomax ICA) was demonstrated for automatically extracting auditory P300 signals within several trials. A signaling equilibrium algorithm was proposed to enhance the effectiveness of the Infomax ICA decomposition. After the mixed signal was decomposed by Infomax ICA, the independent components (ICs) used in auditory P300 reconstruction were automatically chosen using the standard deviation of the fixed temporal pattern, and the auditory P300 was reconstructed from the selected ICs. The experimental results show that the auditory P300 can be detected automatically within five trials. The Pearson correlation coefficient between the standard signal and the signal detected using the proposed method is significantly greater than that between the standard signal and the signal detected using the average method within five trials. The waveform obtained using the proposed algorithm is better and more similar to the standard signal than that obtained by the average method for the same number of trials. Therefore, the proposed method can automatically detect the effective auditory P300 within several trials.
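
    The pipeline (decompose the mixed signal, select the P300-like component, reconstruct) can be sketched with scikit-learn's FastICA standing in for Infomax ICA, and with component selection by correlation against a fixed temporal template replacing the paper's standard-deviation criterion; both substitutions are assumptions made purely for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def extract_p300(channels, template):
        """Recover a P300-like component from mixed channels.
        channels: (n_channels, n_samples); template: (n_samples,).
        FastICA is a stand-in for the paper's Infomax ICA, and template
        correlation a stand-in for its selection criterion."""
        ica = FastICA(n_components=channels.shape[0], random_state=0)
        sources = ica.fit_transform(channels.T).T  # (n_components, n_samples)
        # Pick the component most correlated (in magnitude) with the template.
        corrs = [abs(np.corrcoef(s, template)[0, 1]) for s in sources]
        best = sources[int(np.argmax(corrs))]
        # ICA sign is arbitrary; align the component with the template.
        if np.corrcoef(best, template)[0, 1] < 0:
            best = -best
        return best

    # Demo: a synthetic P300-like bump mixed with two nuisance sources.
    n = 1000
    t = np.arange(n)
    rng = np.random.default_rng(0)
    template = np.exp(-0.5 * ((t - 300) / 30.0) ** 2)  # bump near "300 ms"
    sources = np.vstack([template,
                         np.sin(2 * np.pi * t / 50),
                         rng.standard_normal(n)])
    mixed = rng.standard_normal((3, 3)) @ sources      # simulated electrodes
    recovered = extract_p300(mixed, template)
    ```

    The recovered component matches the synthetic P300 up to scale, which is all that single-trial averaging or peak analysis downstream requires.
    
    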

  9. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  10. The Perception of Auditory Motion.

    Science.gov (United States)

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  11. Electrophysiological correlates of individual differences in perception of audiovisual temporal asynchrony.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2016-06-01

    Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability: 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency in the neural encoding of sound onsets. PMID:27094850
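
    The ERP measure itself is straightforward to compute: average the epochs and take the mean voltage in a latency window time-locked to sound onset. A minimal sketch, where the array layout and sampling rate are assumptions:

    ```python
    import numpy as np

    def window_mean_amplitude(epochs, sfreq, t_start, t_end, t0=0.0):
        """Mean ERP voltage in a latency window.
        epochs: (n_trials, n_samples) array time-locked to sound onset;
        sfreq: sampling rate in Hz; t0: time of the first sample in s.
        The study's 210-270 ms window corresponds to
        t_start=0.210, t_end=0.270."""
        erp = epochs.mean(axis=0)                # average across trials
        i0 = int(round((t_start - t0) * sfreq))  # window edges in samples
        i1 = int(round((t_end - t0) * sfreq))
        return erp[i0:i1].mean()

    # Demo: 5 identical "trials" whose voltage equals time in seconds.
    sfreq = 1000.0
    epochs = np.tile(np.arange(500) / sfreq, (5, 1))
    amp = window_mean_amplitude(epochs, sfreq, 0.210, 0.270)
    ```

    In the study, this per-participant scalar was then correlated with accuracy on the simultaneity judgment task.
    
    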

  12. Electrophysiological correlates of individual differences in perception of audiovisual temporal asynchrony.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2016-06-01

    Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability: 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency in the neural encoding of sound onsets.

  13. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  14. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    Full Text Available OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification in the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demand of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  15. Ectopic external auditory canal and ossicular formation in the oculo-auriculo-vertebral spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Supakul, Nucharin [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ramathibodi Hospital, Mahidol University, Department of Diagnostic and Therapeutic Radiology, Bangkok (Thailand); Kralik, Stephen F. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ho, Chang Y. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Riley Children's Hospital, MRI Department, Indianapolis, IN (United States)

    2015-07-15

    Ear abnormalities in the oculo-auriculo-vertebral spectrum commonly present with varying degrees of external and middle ear atresia, usually in the expected locations of the temporal bone and associated soft tissues, without ectopia of the external auditory canal. We present the unique imaging of a 4-year-old girl with right hemifacial microsomia and an ectopically located atretic external auditory canal terminating in a hypoplastic temporomandibular joint containing bony structures with the appearance of auditory ossicles. This finding suggests an early embryological dysfunction involving Meckel's cartilage of the first branchial arch. (orig.)

  16. Effects of Auditory Rhythm and Music on Gait Disturbances in Parkinson's Disease.

    Science.gov (United States)

    Ashoori, Aidin; Eagleman, David M; Jankovic, Joseph

    2015-01-01

    Gait abnormalities, such as shuffling steps, start hesitation, and freezing, are common and often incapacitating symptoms of Parkinson's disease (PD) and other parkinsonian disorders. Pharmacological and surgical approaches have only limited efficacy in treating these gait disorders. Rhythmic auditory stimulation (RAS), such as playing marching music and dance therapy, has been shown to be a safe, inexpensive, and effective method of improving gait in PD patients. However, RAS that adapts to patients' movements may be more effective than the rigid, fixed-tempo RAS used in most studies. In addition to auditory cueing, immersive virtual reality technologies that utilize interactive computer-generated systems through wearable devices are increasingly used for improving brain-body interaction and sensory-motor integration. Using multisensory cues, these therapies may be particularly suitable for the treatment of parkinsonian freezing and other gait disorders. In this review, we examine the affected neurological circuits underlying gait and temporal processing in PD patients and summarize the current studies demonstrating the effects of RAS on improving these gait deficits. PMID:26617566

  17. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident with, to the left of, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor than for the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching actions, even when reaching towards auditory targets. PMID:26253323

  18. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller and investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and motivate future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. Novel approaches for combining two sensory modalities were designed here, which are valuable for the development of ERP-based BCI paradigms.

  19. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
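
    The gist of the GAM analysis, testing whether an interaction across stimulus dimensions explains responses beyond purely additive terms, can be imitated with a plain least-squares fit on polynomial bases. This is a crude numpy stand-in for the spline-based GAM, applied to synthetic data rather than real spike counts:

    ```python
    import numpy as np

    def fit_rss(y, cols):
        """Least-squares fit of y on the given design columns;
        return the residual sum of squares."""
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    # Synthetic 'neural response' over two stimulus dimensions with a
    # multiplicative interaction (a stand-in for recorded spike data).
    rng = np.random.default_rng(1)
    x1 = rng.uniform(-1, 1, 500)
    x2 = rng.uniform(-1, 1, 500)
    y = np.sin(2 * x1) + 0.5 * x2**2 + 1.5 * x1 * x2 \
        + 0.1 * rng.standard_normal(500)

    # Additive bases: low-order polynomials per dimension (crude
    # stand-in for the smooth spline terms of a GAM).
    additive = [np.ones_like(x1), x1, x1**2, x1**3, x2, x2**2, x2**3]
    rss_additive = fit_rss(y, additive)
    rss_interact = fit_rss(y, additive + [x1 * x2])
    ```

    An additive model cannot absorb the x1*x2 term, so the residual error drops sharply once the interaction column is added; an analogous comparison (smooth terms with and without a tensor interaction) is how a GAM reveals integration across stimulus dimensions.
    
    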

  20. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, due to their extensive auditory–motor practice routine during musical training, have superior performance and learning capabilities when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback can reinforce SRTT performance but not learning, a finding that might provide further insight into auditory–motor integration in musicians at the behavioral level.

  1. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex

    Science.gov (United States)

    Zhuo, Ran; Xue, Hongbo; Chambers, Anna R.; Kolaczyk, Eric; Polley, Daniel B.

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices. PMID:27622211

  2. Losing the beat: deficits in temporal coordination

    Science.gov (United States)

    Palmer, Caroline; Lidji, Pascale; Peretz, Isabelle

    2014-01-01

    Tapping or clapping to an auditory beat, an easy task for most individuals, reveals precise temporal synchronization with auditory patterns such as music, even in the presence of temporal fluctuations. Most models of beat-tracking rely on the theoretical concept of pulse: a perceived regular beat generated by an internal oscillation that forms the foundation of entrainment abilities. Although tapping to the beat is a natural sensorimotor activity for most individuals, not everyone can track an auditory beat. Recently, the case of Mathieu was documented (Phillips-Silver et al. 2011 Neuropsychologia 49, 961–969. (doi:10.1016/j.neuropsychologia.2011.02.002)). Mathieu presented himself as having difficulty following a beat and exhibited synchronization failures. We examined beat-tracking in normal control participants, Mathieu, and a second beat-deaf individual, who tapped with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Both beat-deaf cases exhibited failures in error correction in response to the perturbation task while exhibiting normal spontaneous motor tempi (in the absence of an auditory stimulus), supporting a deficit specific to perception–action coupling. A damped harmonic oscillator model was applied to the temporal adaptation responses; the model's parameters of relaxation time and endogenous frequency accounted for differences between the beat-deaf cases as well as the control group individuals. PMID:25385783
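
    A minimal simulation of the damped-harmonic-oscillator account: after a phase perturbation, the tap asynchrony relaxes back at a rate set by the relaxation time, oscillating at the endogenous frequency. The equation form and parameter values below are assumptions for illustration, not the fitted model of the study:

    ```python
    import numpy as np

    def oscillator_response(perturb, omega, tau, dt=0.005, t_max=4.0):
        """Asynchrony x(t) after a phase perturbation, modeled as a
        damped harmonic oscillator x'' + (2/tau) x' + omega**2 x = 0,
        integrated with semi-implicit Euler. tau plays the role of the
        relaxation time and omega the endogenous frequency; the exact
        equation form is an assumption."""
        n = int(t_max / dt)
        x = np.empty(n)
        x[0], v = perturb, 0.0
        for i in range(1, n):
            a = -(2.0 / tau) * v - omega**2 * x[i - 1]
            v += a * dt
            x[i] = x[i - 1] + v * dt
        return x

    # A short relaxation time corrects a 50 ms perturbation quickly;
    # a long one leaves residual asynchrony oscillating for seconds.
    fast = oscillator_response(50.0, omega=2 * np.pi, tau=0.3)
    slow = oscillator_response(50.0, omega=2 * np.pi, tau=3.0)
    ```

    Fitting tau and omega to each individual's recovery curve is, in spirit, how the model's parameters separated the beat-deaf cases from controls.
    
    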

  3. The effect of exogenous spatial attention on auditory information processing.

    OpenAIRE

    Kanai, Kenichi; Ikeda, Kazuo; Tayama, Tadayuki

    2007-01-01

    This study investigated the effect of exogenous spatial attention on auditory information processing. In Experiments 1, 2 and 3, temporal order judgment tasks were performed to examine the effect. In Experiments 1 and 2, a cue tone was presented to either the left or right ear, followed by sequential presentation of two target tones. The subjects judged the order of presentation of the target tones. The results showed that subjects heard both tones simultaneously when the target tone, which wa...

  4. Temporal kinetics of Marek's disease herpesvirus: integration occurs early after infection in both B and T cells

    Science.gov (United States)

    Marek's disease virus (MDV) is an oncogenic α-herpesvirus that induces Marek's disease, characterized by fatal lymphomas in chickens. Here, we explored the timing during pathogenesis when the virus integrates into the host genome, the cell type involved, the role of viral integration on cellular tran...

  5. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    of Brodmann, more intense on the contralateral (right) side. There is activation of both frontal executive areas without lateralization. Simultaneously, while area 39 of Brodmann was being activated, the temporal lobe was being inhibited. This seemingly unreported functional observation suggests that inhibitory, and not only excitatory, relays play a role in the auditory pathways. The central activity in our patient (without external auditory stimuli), who was tested while having musical hallucinations, was a mirror image of that of our normal stimulated volunteers. It is suggested that the trigger role of the inner ear, if any, could conceivably be inhibitory or disinhibitory and not necessarily purely excitatory. Based on our observations, the trigger effect in our patient could occur via the left ear. Finally, our functional studies suggest that auditory memory for musical perceptions could be located in the right area 39 of Brodm

  6. Heritability of non-speech auditory processing skills.

    Science.gov (United States)

    Brewer, Carmen C; Zalewski, Christopher K; King, Kelly A; Zobay, Oliver; Riley, Alison; Ferguson, Melanie A; Bird, Jonathan E; McCabe, Margaret M; Hood, Linda J; Drayna, Dennis; Griffith, Andrew J; Morell, Robert J; Friedman, Thomas B; Moore, David R

    2016-08-01

    Recent insight into the genetic bases for autism spectrum disorder, dyslexia, stuttering, and language disorders suggests that neurogenetic approaches may also reveal at least one etiology of auditory processing disorder (APD). A person with an APD typically has difficulty understanding speech in background noise despite having normal pure-tone hearing sensitivity. The estimated prevalence of APD may be as high as 10% in the pediatric population, yet the causes are unknown and have not been explored by molecular or genetic approaches. The aim of our study was to determine the heritability of frequency and temporal resolution for auditory signals and of speech recognition in noise in 96 identical or fraternal twin pairs, aged 6-11 years. Measures of auditory processing (AP) of non-speech sounds included backward masking (temporal resolution), notched noise masking (spectral resolution), pure-tone frequency discrimination (temporal fine structure sensitivity), and nonsense syllable recognition in noise. We provide evidence of significant heritability, ranging from 0.32 to 0.74, for individual measures of these non-speech-based AP skills that are crucial for understanding spoken language. Identification of specific heritable AP traits such as these serves as a basis to pursue the genetic underpinnings of APD by identifying genetic variants associated with common AP disorders in children and adults. PMID:26883091
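
Twin-based heritability figures like those above are classically obtained by comparing trait correlations in identical (MZ) versus fraternal (DZ) twin pairs. A minimal sketch using Falconer's formula, h2 = 2(r_MZ - r_DZ); the correlation values are hypothetical and not taken from the study:

```python
# Falconer's formula: a classic first-pass heritability estimate from twin data.
# h2 = 2 * (r_mz - r_dz), where r_mz and r_dz are the intraclass correlations
# of a trait (e.g., a backward-masking threshold) in identical vs fraternal pairs.
# The correlation values below are illustrative, not from the study.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate from MZ and DZ twin correlations, clipped to [0, 1]."""
    h2 = 2.0 * (r_mz - r_dz)
    return max(0.0, min(1.0, h2))

# Hypothetical correlations for a backward-masking threshold:
print(round(falconer_h2(0.70, 0.45), 3))  # about 0.5
```

In practice, structural equation (ACE) modeling is used rather than this simple difference, but the formula conveys the core logic: the more MZ pairs resemble each other beyond DZ pairs, the larger the genetic contribution.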

  7. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  9. Tactile feedback improves auditory spatial localization

    OpenAIRE

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds b...

  11. High gamma activity in response to deviant auditory stimuli recorded directly from human cortex.

    Science.gov (United States)

    Edwards, Erik; Soltani, Maryam; Deouell, Leon Y; Berger, Mitchel S; Knight, Robert T

    2005-12-01

    We recorded electrophysiological responses from the left frontal and temporal cortex of awake neurosurgical patients to both repetitive background and rare deviant auditory stimuli. Prominent sensory event-related potentials (ERPs) were recorded from auditory association cortex of the temporal lobe and adjacent regions surrounding the posterior Sylvian fissure. Deviant stimuli generated an additional longer-latency mismatch response, maximal at more anterior temporal lobe sites. We found low gamma activity (30-60 Hz) in auditory association cortex, and we also show the existence of high-frequency oscillations above the traditional gamma range (high gamma, 60-250 Hz). Sensory and mismatch potentials were not reliably observed at frontal recording sites. We suggest that the high gamma oscillations are sensory-induced neocortical ripples, similar in physiological origin to the well-studied ripples of the hippocampus. PMID:16093343
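
The low-gamma (30-60 Hz) and high-gamma (60-250 Hz) bands mentioned above are conventionally isolated with band-pass filters. A minimal sketch using zero-phase Butterworth filters on a synthetic trace; the sampling rate and signal are illustrative, not the study's recordings:

```python
# Sketch: extracting the low-gamma (30-60 Hz) and high-gamma (60-250 Hz) bands
# from a raw trace with zero-phase Butterworth band-pass filters.
# The synthetic signal and sampling rate are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt

np.random.seed(0)
fs = 1000.0  # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic trace: 40 Hz (low gamma) + 120 Hz (high gamma) + noise
x = np.sin(2*np.pi*40*t) + 0.5*np.sin(2*np.pi*120*t) + 0.2*np.random.randn(t.size)

def bandpass(sig, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi (Hz)."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, sig)  # forward-backward pass: no phase distortion

low_gamma = bandpass(x, 30, 60, fs)
high_gamma = bandpass(x, 60, 250, fs)
```

Zero-phase filtering (filtfilt) matters here because latency comparisons, such as the mismatch-response timing discussed above, would otherwise be biased by the filter's phase delay.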

  12. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
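
The contrast above between phase-based and amplitude-based metrics can be illustrated on toy signals: spectral coherence rewards a consistent phase relationship at each frequency, while amplitude-envelope correlation looks only at co-fluctuating power. A simplified sketch, not the authors' MEG pipeline; the signals and the beta-band limits are assumptions:

```python
# Two simplified connectivity metrics from the abstract's comparison:
# (1) spectral coherence (phase-sensitive) via Welch cross-spectra,
# (2) amplitude-envelope correlation (power-based) via the Hilbert transform.
# Signals, frequencies, and the beta band (13-30 Hz) are illustrative only.
import numpy as np
from scipy.signal import coherence, hilbert

rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 20.0, 1.0 / fs)
# Shared beta-band component with a common slow amplitude modulation
common = (1 + 0.8 * np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * 20 * t)
x = common + 0.3 * rng.standard_normal(t.size)  # "auditory" sensor
y = common + 0.3 * rng.standard_normal(t.size)  # "visual" sensor

# (1) Magnitude-squared coherence, averaged over the beta band
f, cxy = coherence(x, y, fs=fs, nperseg=512)
beta = (f >= 13) & (f <= 30)
beta_coherence = cxy[beta].mean()

# (2) Correlation between instantaneous amplitude envelopes
env_x = np.abs(hilbert(x))
env_y = np.abs(hilbert(y))
envelope_corr = np.corrcoef(env_x, env_y)[0, 1]
```

On these toy signals both metrics detect the coupling; the abstract's point is that on real MEG data they can dissociate, because envelope correlations survive phase jitter that destroys coherence.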

  13. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability.

    Science.gov (United States)

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-07-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  14. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, as different methods could identify the discrepancy between an absent ABR and present hearing thresholds. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and hearing loss thresholds that can be permanent, worsen, or improve.

  15. Perceptual grouping over time within and across auditory and tactile modalities.

    Directory of Open Access Journals (Sweden)

    I-Fan Lin

    In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

  16. Auditory Processing Disorder in Children

    Science.gov (United States)


  17. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    A positive, realistic attitude and healthy self-esteem in a child with APD can work wonders. And kids with APD can go on to ...

  18. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial...

  19. The critical role of Golgi cells in regulating spatio-temporal integration and plasticity at the cerebellum input stage

    Directory of Open Access Journals (Sweden)

    2008-07-01

    After its discovery at the end of the 19th century (Golgi, 1883), the Golgi cell was precisely described by S.R. y Cajal (see Cajal, 1987, 1995) and functionally identified as an inhibitory interneuron 50 years later by J.C. Eccles and colleagues (Eccles et al., 1967). Its role was then cast by Marr (1969), within the Motor Learning Theory, as a codon-size regulator of granule cell activity. It was immediately clear that Golgi cells had to play a critical role, since they are the main inhibitory interneurons of the granular layer and control the activity of as many as 100 million granule cells. In vitro, Golgi cells show pacemaking, resonance, phase-reset and rebound excitation in the theta-frequency band. These properties are likely to impact their activity in vivo, which shows irregular spontaneous beating modulated by sensory inputs and burst responses to punctate stimulation followed by a silent pause. Moreover, investigations have given insight into Golgi cell connectivity within the cerebellar network and its impact on the spatio-temporal organization of activity. It turns out that Golgi cells can control both the temporal dynamics and the spatial distribution of information transmitted through the cerebellar network. Moreover, Golgi cells regulate the induction of long-term synaptic plasticity at the mossy fiber - granule cell synapse. Thus, the concept is emerging that Golgi cells are of critical importance for regulating granular layer network activity, bearing important consequences for cerebellar computation as a whole.

  20. Cancer of the external auditory canal

    DEFF Research Database (Denmark)

    Nyrop, Mette; Grøntved, Aksel

    2002-01-01

    OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system, used both on squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000. PATIENTS: Ten women and 10 men with previously untreated primary cancer. Median age at diagnosis was 67 years (range, 31-87 years). Survival data included 18 patients with at least 2 years of follow-up or recurrence. INTERVENTION: Local canal resection or partial temporal bone resection. MAIN OUTCOME MEASURE: Recurrence rate. RESULTS: Half of the patients had squamous cell carcinoma. Thirteen of the patients had stage I tumor (65%), 2 had stage II (10%), 2 had stage III (10%), and 3 had stage IV tumor (15%). Twelve patients were cured. All patients with stage I or II cancers were cured except 1...

  1. Spectral and temporal properties of the supergiant fast X-ray transient IGR J18483-0311 observed by INTEGRAL

    CERN Document Server

    Ducci, L; Sasaki, M; Santangelo, A; Esposito, P; Romano, P; Vercellone, S

    2013-01-01

    IGR J18483-0311 is a supergiant fast X-ray transient whose compact object is located in a wide (18.5 d) and eccentric (e~0.4) orbit. The system shows sporadic outbursts that reach X-ray luminosities of ~1e36 erg/s. We investigated the timing properties of IGR J18483-0311 and studied the spectra during bright outbursts by fitting physical models based on thermal and bulk Comptonization processes for accreting compact objects. We analysed archival INTEGRAL data collected in the period 2003-2010, focusing on the observations with IGR J18483-0311 in outburst. We searched for pulsations in the INTEGRAL light curves of each outburst. We took advantage of the broadband observing capability of INTEGRAL for the spectral analysis. We observed 15 outbursts, seven of which we report here for the first time. This data analysis almost doubles the statistics of flares of this binary system detected by INTEGRAL. A refined timing analysis did not reveal a significant periodicity in the INTEGRAL observation where a ~21s pulsation w...

  2. The Neurophysiology of Auditory Hallucinations – A Historic and Contemporary Review

    Directory of Open Access Journals (Sweden)

    Remko van Lutterveld

    2011-05-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) are two techniques that distinguish themselves from other neuroimaging methodologies through their ability to directly measure brain-related activity and their high temporal resolution. A large body of research has applied these techniques to study auditory hallucinations. Across a variety of approaches, the left superior temporal cortex is consistently reported to be involved in this symptom. Moreover, there is increasing evidence that a failure in corollary discharge, i.e., a neural signal originating in frontal speech areas that indicates to sensory areas that forthcoming thought is self-generated, may underlie the experience of auditory hallucinations.

  3. Brainstem auditory evoked potentials in children with lead exposure

    Directory of Open Access Journals (Sweden)

    Katia de Freitas Alvarenga

    2015-02-01

    Introduction: Earlier studies have demonstrated an auditory effect of lead exposure in children, but information on the effects of low chronic exposures needs to be further elucidated. Objective: To investigate the effect of low chronic exposures on the auditory system in children with a history of low blood lead levels, using an auditory electrophysiological test. Methods: Contemporary cross-sectional cohort. Study participants underwent tympanometry, pure tone and speech audiometry, transient evoked otoacoustic emissions, and brainstem auditory evoked potentials, with blood lead monitoring over a period of 35.5 months. The study included 130 children, with ages ranging from 18 months to 14 years, 5 months (mean age 6 years, 8 months ± 3 years, 2 months). Results: The mean time-integrated cumulative blood lead index was 12 µg/dL (SD ± 5.7; range: 2.4-33). All participants had hearing thresholds equal to or below 20 dBHL and normal amplitudes of transient evoked otoacoustic emissions. No association was found between the absolute latencies of waves I, III, and V, the interpeak latencies I-III, III-V, and I-V, and the cumulative lead values. Conclusion: No evidence of toxic effects from chronic low lead exposures was observed on the auditory function of children living in a lead-contaminated area.
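
A "time-integrated cumulative blood lead index" of the kind reported above is commonly computed as a time-weighted average of serial blood lead measurements. A sketch using the trapezoidal rule; the measurement schedule and values are hypothetical, and the study's exact index definition may differ:

```python
# Sketch of a time-integrated blood lead index: trapezoidal integration of
# serial blood lead measurements over the monitoring period, divided by the
# total time, giving a time-weighted average in ug/dL. Data are hypothetical.

def time_weighted_lead_index(months, levels):
    """Time-weighted average of blood lead levels sampled at `months`."""
    assert len(months) == len(levels) >= 2
    area = 0.0
    for i in range(1, len(months)):
        dt = months[i] - months[i - 1]
        area += 0.5 * (levels[i] + levels[i - 1]) * dt  # trapezoid segment
    return area / (months[-1] - months[0])

# Hypothetical monitoring: measurements at 0, 12, 24, and 35.5 months
print(round(time_weighted_lead_index([0, 12, 24, 35.5], [10, 14, 12, 11]), 2))
# about 12.18 ug/dL
```

Weighting by the interval between samples prevents a cluster of closely spaced measurements from dominating the index, which matters when monitoring schedules are irregular, as over a 35.5-month period.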

  4. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.

  5. A spatio-temporal reconstruction of sea surface temperature during Dansgaard-Oeschger events from model-data integration

    Science.gov (United States)

    Jensen, Mari F.; Nummelin, Aleksi; Borg Nielsen, Søren; Sadatzki, Henrik; Sessford, Evangeline; Kleppin, Hannah; Risebrobakken, Bjørg; Born, Andreas

    2016-04-01

    Proxy data suggest a large variability in North Atlantic sea surface temperature (SST) and sea ice cover during the Dansgaard-Oeschger (DO) events of the last glacial. However, the mechanisms behind these changes are still debated. It is not clear whether the ocean temperatures are controlled by forced changes in the northward ocean heat transport or by local surface fluxes, or if, instead, the SST changes can be explained by internal variability. We address these questions by analyzing a full DO event using proxy-surrogate reconstructions. This method provides a means to combine the temporally accurate information from scarce proxy reconstructions with the spatial and physical consistency of climate models. Model simulations are treated as a pool of possible ocean states from which the closest match to the reconstructions, e.g., one model year, is selected based on an objective cost function. The original chronology of the model is replaced by that of the proxy data. Repeating this algorithm for each proxy time step yields a comprehensive four-dimensional dataset that is consistent with the reconstructed data. In addition, the solution also includes variables and locations for which no reconstructions exist. We show that, using only climate model data from preindustrial control simulations, we are able to reconstruct the SST variability in the subpolar gyre region over the DO event. In the eastern Nordic Seas, on the other hand, the reconstruction captures the temporal pattern but underestimates the amplitude of the variations. Based on our analysis, we suggest that the variability of the subpolar gyre during the analyzed DO event can be explained by internal variability of the climate system alone. Further research is needed to determine whether the missing amplitude in the Nordic Seas is due to model deficiencies or whether external forcing or some feedback mechanism could give rise to larger SST variability.
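
The proxy-surrogate algorithm described above can be sketched in a few lines: for each proxy time step, select the model year from the simulation pool that minimizes an objective cost function against the proxy values, then read out the full model field for that year. All data below are synthetic stand-ins, and the simple squared-error cost is an assumption:

```python
# Proxy-surrogate reconstruction in miniature: for each proxy time step, pick
# the model year (from a pool of simulated states) whose SST at the proxy
# sites best matches the reconstructions, using a squared-error cost function.
# The pool, sites, and proxy series below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_grid = 500, 100          # model pool: 500 years x 100 grid cells
pool = rng.normal(0.0, 1.0, (n_years, n_grid))
proxy_sites = [3, 47, 80]           # grid cells where proxy records exist

def select_analogues(proxy_series, pool, sites):
    """Return, per proxy time step, the pool index minimizing the cost."""
    chosen = []
    for values in proxy_series:     # values: proxy SSTs at `sites` for one step
        cost = ((pool[:, sites] - values) ** 2).sum(axis=1)
        chosen.append(int(np.argmin(cost)))
    return chosen

# Synthetic proxy record: three time steps of SST anomalies at the proxy sites
proxy_series = np.array([[0.5, -0.2, 1.0],
                         [0.1,  0.3, -0.4],
                         [-1.2, 0.8, 0.2]])
analogue_years = select_analogues(proxy_series, pool, proxy_sites)
# pool[analogue_years] is then the full-field reconstruction, including grid
# cells with no proxy data, reordered to follow the proxy chronology.
```

The key design point, as in the abstract, is that the model's own chronology is discarded: physical consistency comes from each selected state being a real simulated year, while the time axis comes entirely from the proxies.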

  6. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    Science.gov (United States)

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  7. Different levels of Ih determine distinct temporal integration in bursting and regular-spiking neurons in rat subiculum.

    NARCIS (Netherlands)

    I. van Welie; M.W.H. Remme; J.A. van Hooft; W.J. Wadman

    2006-01-01

    Pyramidal neurons in the subiculum typically display either bursting or regular-spiking behaviour. Although this classification into two neuronal classes is well described, it is unknown how these two classes of neurons contribute to the integration of input to the subiculum. Here, we report that bu

  8. Gamma-ray bursts observed by the INTEGRAL-SPI anticoincidence shield: A study of individual pulses and temporal variability

    DEFF Research Database (Denmark)

    Ryde, F.; Borgonovo, L.; Larsson, S.;

    2003-01-01

    We study a set of 28 GRB light curves detected between 15 December 2002 and 9 June 2003 by the anti-coincidence shield of the spectrometer (SPI) of INTEGRAL. During this period it detected 50 bursts that were confirmed by other instruments, with a time resolution of 50 ms. First, we...

  10. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  11. Auditory hedonic phenotypes in dementia: A behavioural and neuroanatomical analysis.

    Science.gov (United States)

    Fletcher, Phillip D; Downey, Laura E; Golden, Hannah L; Clark, Camilla N; Slattery, Catherine F; Paterson, Ross W; Schott, Jonathan M; Rohrer, Jonathan D; Rossor, Martin N; Warren, Jason D

    2015-06-01

    Patients with dementia may exhibit abnormally altered liking for environmental sounds and music, but such altered auditory hedonic responses have not been studied systematically. Here we addressed this issue in a cohort of 73 patients representing major canonical dementia syndromes (behavioural variant frontotemporal dementia (bvFTD), semantic dementia (SD), progressive nonfluent aphasia (PNFA) and amnestic Alzheimer's disease (AD)) using a semi-structured caregiver behavioural questionnaire and voxel-based morphometry (VBM) of patients' brain MR images. Behavioural responses signalling abnormal aversion to environmental sounds, aversion to music or heightened pleasure in music ('musicophilia') occurred in around half of the cohort but showed clear syndromic and genetic segregation, occurring in most patients with bvFTD but infrequently in PNFA, and more commonly in association with MAPT than C9orf72 mutations. Aversion to sounds was the exclusive auditory phenotype in AD, whereas more complex phenotypes including musicophilia were common in bvFTD and SD. Auditory hedonic alterations correlated with grey matter loss in a common, distributed, right-lateralised network including antero-mesial temporal lobe, insula, anterior cingulate and nucleus accumbens. Our findings suggest that abnormalities of auditory hedonic processing are a significant issue in common dementias. Sounds may constitute a novel probe of brain mechanisms for emotional salience coding that are targeted by neurodegenerative disease. PMID:25929717

  12. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011, 2, 479-489. DOI: 10.1002/wcs.123. For further resources related to this article, please visit the WIREs website. PMID:26302301

  13. Objective Audio Quality Assessment Based on Spectro-Temporal Modulation Analysis

    OpenAIRE

    Guo, Ziyuan

    2011-01-01

    Objective audio quality assessment is an interdisciplinary research area that incorporates audiology and machine learning. Although much work has been done on the machine learning aspect, the audiology aspect also deserves investigation. This thesis proposes a non-intrusive audio quality assessment algorithm, which is based on an auditory model that simulates the human auditory system. The auditory model is based on spectro-temporal modulation analysis of the spectrogram, which has been proven to be ...

  14. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    OpenAIRE

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading v...

  15. Enhanced spontaneous functional connectivity of the superior temporal gyrus in early deafness

    OpenAIRE

    Hao Ding; Dong Ming; Baikun Wan; Qiang Li; Wen Qin; Chunshui Yu

    2016-01-01

    Early auditory deprivation may drive the auditory cortex into cross-modal processing of non-auditory sensory information. In a recent study, we showed that early deaf subjects exhibited increased activation in the superior temporal gyrus (STG) bilaterally during visual spatial working memory; however, the changes in the organization of the STG-related spontaneous functional network, and their cognitive relevance, are still unknown. To clarify this issue, we applied resting state functional ...

  16. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Directory of Open Access Journals (Sweden)

    Jean-Luc Schwartz

    2014-07-01

    Full Text Available An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  17. Interaural cross correlation of event-related potentials and diffusion tensor imaging in the evaluation of auditory processing disorder: a case study.

    Science.gov (United States)

    Jerger, James; Martin, Jeffrey; McColl, Roderick

    2004-01-01

    In a previous publication (Jerger et al, 2002), we presented event-related potential (ERP) data on a pair of 10-year-old twin girls (Twins C and E), one of whom (Twin E) showed strong evidence of auditory processing disorder. For the present paper, we analyzed cross-correlation functions of ERP waveforms generated in response to the presentation of target stimuli to either the right or left ears in a dichotic paradigm. There were four conditions; three involved the processing of real words for either phonemic, semantic, or spectral targets; one involved the processing of a nonword acoustic signal. Marked differences in the cross-correlation functions were observed. In the case of Twin C, cross-correlation functions were uniformly normal across both hemispheres. The functions for Twin E, however, suggest poorly correlated neural activity over the left parietal region during the three word processing conditions, and over the right parietal area in the nonword acoustic condition. Differences between the twins' brains were evaluated using diffusion tensor magnetic resonance imaging (DTI). For Twin E, results showed reduced anisotropy over the length of the midline corpus callosum and adjacent lateral structures, implying reduced myelin integrity. Taken together, these findings suggest that failure to achieve appropriate temporally correlated bihemispheric brain activity in response to auditory stimulation, perhaps as a result of faulty interhemispheric communication via corpus callosum, may be a factor in at least some children with auditory processing disorder. PMID:15030103
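The cross-correlation analysis of ERP waveforms described in this record can be illustrated with a small simulation. The sketch below is not the authors' pipeline: the sampling rate, waveform shapes, interhemispheric delay, and noise level are all illustrative assumptions, and the normalized cross-correlation is computed directly in NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                          # Hz; an assumed ERP sampling rate
t = np.arange(0, 1.0, 1 / fs)

def norm_xcorr(x, y, max_lag):
    """Normalized cross-correlation of two waveforms over a range of sample lags."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    vals = []
    for L in lags:
        if L >= 0:
            a, b = x[L:], y[:y.size - L]
        else:
            a, b = x[:x.size + L], y[-L:]
        vals.append(np.mean(a * b))
    return lags, np.array(vals)

# Simulated ERP waveforms: a shared evoked component plus site-specific noise;
# the second channel lags the first by 5 samples (20 ms at 250 Hz).
erp = np.exp(-((t - 0.3) ** 2) / 0.005)
left = erp + 0.05 * rng.standard_normal(t.size)
right = np.roll(erp, 5) + 0.05 * rng.standard_normal(t.size)

lags, cc = norm_xcorr(left, right, max_lag=25)
peak_lag = lags[np.argmax(cc)]
print(cc.max(), peak_lag * 1000 / fs)  # peak correlation and its lag in ms
```

In this framing, a cross-correlation function with a strong peak near zero lag would indicate well-correlated bihemispheric activity, while a flattened or shifted function (as reported for Twin E over parietal regions) would suggest poorly correlated neural activity.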

  18. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Science.gov (United States)

    Schwartz, Jean-Luc; Savariaux, Christophe

    2014-07-01

    An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  19. Phonetic Feature Encoding in Human Superior Temporal Gyrus

    Science.gov (United States)

    Mesgarani, Nima; Cheung, Connie; Johnson, Keith; Chang, Edward F.

    2015-01-01

    During speech perception, linguistic elements such as consonants and vowels are extracted from a complex acoustic speech signal. The superior temporal gyrus (STG) participates in high-order auditory processing of speech, but how it encodes phonetic information is poorly understood. We used high-density direct cortical surface recordings in humans while they listened to natural, continuous speech to reveal the STG representation of the entire English phonetic inventory. At single electrodes, we found response selectivity to distinct phonetic features. Encoding of acoustic properties was mediated by a distributed population response. Phonetic features could be directly related to tuning for spectrotemporal acoustic cues, some of which were encoded in a nonlinear fashion or by integration of multiple cues. These findings demonstrate the acoustic-phonetic representation of speech in human STG. PMID:24482117

  20. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia.

    Science.gov (United States)

    Dale, Corby L; Brown, Ethan G; Fisher, Melissa; Herman, Alexander B; Dowling, Anne F; Hinkley, Leighton B; Subramaniam, Karuna; Nagarajan, Srikantan S; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10-week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  1. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

    Full Text Available Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  2. Multimodal emotion perception after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Valérie eMilesi

    2014-05-01

    Full Text Available In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion.

  3. Placing prairie pothole wetlands along spatial and temporal continua to improve integration of wetland function in ecological investigations

    Science.gov (United States)

    Euliss, Ned H.; Mushet, David M.; Newton, Wesley E.; Otto, Clint R.V.; Nelson, Richard D.; LaBaugh, James W.; Scherff, Eric J.; Rosenberry, Donald O.

    2014-01-01

    We evaluated the efficacy of using chemical characteristics to rank wetland relation to surface and groundwater along a hydrologic continuum ranging from groundwater recharge to groundwater discharge. We used 27 years (1974–2002) of water chemistry data from 15 prairie pothole wetlands and known hydrologic connections of these wetlands to groundwater to evaluate spatial and temporal patterns in chemical characteristics that correspond to the unique ecosystem functions each wetland performed. Due to the mineral content and the low permeability rate of glacial till and soils, salinity of wetland waters increased along a continuum of wetland relation to groundwater recharge, flow-through or discharge. Mean inter-annual specific conductance (a proxy for salinity) increased along this continuum from wetlands that recharge groundwater being fresh to wetlands that receive groundwater discharge being the most saline, and wetlands that both recharge and discharge to groundwater (i.e., groundwater flow-through wetlands) being of intermediate salinity. The primary axis from a principal component analysis revealed that specific conductance (and major ions affecting conductance) explained 71% of the variation in wetland chemistry over the 27 years of this investigation. We found that long-term averages from this axis were useful to identify a wetland’s long-term relation to surface and groundwater. Yearly or seasonal measurements of specific conductance can be less definitive because of highly dynamic inter- and intra-annual climate cycles that affect water volumes and the interaction of groundwater and geologic materials, and thereby influence the chemical composition of wetland waters. The influence of wetland relation to surface and groundwater on water chemistry has application in many scientific disciplines and is especially needed to improve ecological understanding in wetland investigations. We suggest ways that monitoring in situ wetland conditions could be linked
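The principal component analysis reported here (a primary axis dominated by specific conductance and major ions, explaining 71% of chemical variation) can be sketched on synthetic data. The matrix below is not the study's dataset: the wetland count matches the abstract, but the variable set, loadings, and noise level are illustrative assumptions, with PCA computed via SVD of the centered data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic long-term averages for 15 wetlands: specific conductance plus
# four major-ion concentrations, all driven by a single salinity gradient
# (recharge -> flow-through -> discharge), with small independent noise.
n_wetlands, n_vars = 15, 5
gradient = np.linspace(0.0, 1.0, n_wetlands)          # position on the continuum
loadings = np.array([1.0, 0.9, 0.8, 0.7, 0.6])        # assumed covariation weights
X = np.outer(gradient, loadings) + 0.05 * rng.standard_normal((n_wetlands, n_vars))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)                   # variance share per axis

# Scores on the primary axis order wetlands along the salinity continuum.
pc1_scores = Xc @ Vt[0]
print(explained[0])
```

When one hydrologic gradient drives all of the chemistry variables, the first axis absorbs most of the variance, which is the pattern behind the 71% figure in the abstract; long-term mean scores on that axis then index each wetland's relation to groundwater.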

  4. Spectral and temporal properties of long GRBs detected by INTEGRAL from 3 keV to 8 MeV

    DEFF Research Database (Denmark)

    Martin-Carrillo, A.; Topinka, M.; Hanlon, L.;

    2010-01-01

    Since its launch in 2002, INTEGRAL has triggered on more than 78 γ-ray bursts in the 20-200 keV energy range with the IBIS/ISGRI instrument. Almost 30% of these bursts occurred within the fully coded field of view of the JEM-X detector (5) which operates in the 3-35 keV energy range. A detailed s...

  5. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  6. Monitoring Tree Population Dynamics in Arid Zone Through Multiple Temporal Scales: Integration of Spatial Analysis, Change Detection and Field Long Term Monitoring

    Science.gov (United States)

    Isaacson, S.; Rachmilevitch, S.; Ephrath, J. E.; Maman, S.; Blumberg, D. G.

    2016-06-01

    High mortality rates and lack of recruitment in the acacia populations throughout the Negev Desert and the Arava rift valley of Israel have been reported in previous studies. However, it is difficult to determine whether these reports constitute evidence of a significant declining trend in the tree populations, because of the slow population dynamics of acacia trees and the lack of long-term continuous monitoring data. We suggest a new data analysis technique that expands the time scope of long-term field monitoring of trees in arid environments, enabling us to improve our understanding of the spatial and temporal changes of these populations. We implemented two different approaches to expand the time scope of the acacia population field survey: (1) individual-based tree change detection using Corona satellite images and (2) spatial analysis of the tree population, converting spatial data into temporal data. The next step was to integrate the results of the two analysis techniques (change detection and spatial analysis) with field monitoring. This technique can be applied to other tree populations in arid environments to help assess the vegetation conditions and dynamics of those ecosystems.

  7. MONITORING TREE POPULATION DYNAMICS IN ARID ZONE THROUGH MULTIPLE TEMPORAL SCALES: INTEGRATION OF SPATIAL ANALYSIS, CHANGE DETECTION AND FIELD LONG TERM MONITORING

    Directory of Open Access Journals (Sweden)

    S. Isaacson

    2016-06-01

    Full Text Available High mortality rates and lack of recruitment in the acacia populations throughout the Negev Desert and the Arava rift valley of Israel have been reported in previous studies. However, it is difficult to determine whether these reports constitute evidence of a significant declining trend in the tree populations, because of the slow population dynamics of acacia trees and the lack of long-term continuous monitoring data. We suggest a new data analysis technique that expands the time scope of long-term field monitoring of trees in arid environments, enabling us to improve our understanding of the spatial and temporal changes of these populations. We implemented two different approaches to expand the time scope of the acacia population field survey: (1) individual-based tree change detection using Corona satellite images and (2) spatial analysis of the tree population, converting spatial data into temporal data. The next step was to integrate the results of the two analysis techniques (change detection and spatial analysis) with field monitoring. This technique can be applied to other tree populations in arid environments to help assess the vegetation conditions and dynamics of those ecosystems.

  8. Temporal Succession of Phytoplankton Assemblages in a Tidal Creek System of the Sundarbans Mangroves: An Integrated Approach

    Directory of Open Access Journals (Sweden)

    Dola Bhattacharjee

    2013-01-01

    Full Text Available Sundarbans, the world's largest mangrove ecosystem, is unique and biologically diverse. A study was undertaken to track temporal succession of phytoplankton assemblages at the generic level (≥10 µm) encompassing 31 weeks of sampling (June 2010–May 2011) in Sundarbans based on microscopy and hydrological measurements. As part of this study, amplification and sequencing of type ID rbcL subunit of RuBisCO enzyme were also applied to infer chromophytic algal groups (≤10 µm size) from one of the study points. We report the presence of 43 genera of Bacillariophyta, in addition to other phytoplankton groups, based on microscopy. Phytoplankton cell abundance, which was highest in winter and spring, ranged between 300 and 27,500 cells/L during this study. Cell biovolume varied between winter of 2010 (90–35281.04 µm³) and spring-summer of 2011 (52–33962.24 µm³). Winter supported large chain forming diatoms, while spring supported small sized diatoms, followed by other algal groups in summer. The clone library approach showed dominance of Bacillariophyta-like sequences, in addition to Cryptophyta-, Haptophyta-, Pelagophyta-, and Eustigmatophyta-like sequences which were detected for the first time highlighting their importance in mangrove ecosystem. This study clearly shows that a combination of microscopy and molecular tools can improve understanding of phytoplankton assemblages in mangrove environments.

  9. Hearing Mechanisms and Noise Metrics Related to Auditory Masking in Bottlenose Dolphins (Tursiops truncatus).

    Science.gov (United States)

    Branstetter, Brian K; Bakhtiari, Kimberly L; Trickey, Jennifer S; Finneran, James J

    2016-01-01

    Odontocete cetaceans are acoustic specialists that depend on sound to hunt, forage, navigate, detect predators, and communicate. Auditory masking from natural and anthropogenic sound sources may adversely affect these fitness-related capabilities. The ability to detect a tone in a broad range of natural, anthropogenic, and synthesized noise was tested with bottlenose dolphins using a psychophysical, band-widening procedure. Diverging masking patterns were found for noise bandwidths greater than the width of an auditory filter. Despite different noise types having equal-pressure spectral-density levels (95 dB re 1 μPa²/Hz), masked detection threshold differences were as large as 22 dB. Consecutive experiments indicated that noise types with increased levels of amplitude modulation resulted in comodulation masking release due to within-channel and across-channel auditory mechanisms. The degree to which noise types were comodulated (comodulation index) was assessed by calculating the magnitude-squared coherence between the temporal envelope from an auditory filter centered on the signal and temporal envelopes from flanking filters. Statistical models indicate that masked thresholds in a variety of noise types, at a variety of levels, can be explained with metrics related to the comodulation index in addition to the pressure spectral-density level of noise. This study suggests that predicting auditory masking from ocean noise sources depends on both spectral and temporal properties of the noise. PMID:26610950
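The comodulation measure described above (similarity between on-signal and flanking-band envelopes) can be approximated in a small simulation. This is a sketch, not the authors' implementation: it uses ideal FFT-mask filters in place of realistic auditory filters and a squared envelope correlation as a simplified stand-in for magnitude-squared coherence, and all frequencies and bandwidths are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 8000, 2.0
t = np.arange(int(fs * dur)) / fs

def bandpass(x, lo, hi, fs):
    """Ideal FFT-mask band-pass filter (a crude stand-in for an auditory filter)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, x.size)

def envelope(x):
    """Temporal envelope via the analytic signal (Hilbert transform by FFT)."""
    N = x.size
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        h[N // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def comodulation_index(noise, f_signal, f_flank, bw, fs):
    """Squared correlation between on-signal and flanking-band envelopes."""
    e1 = envelope(bandpass(noise, f_signal - bw / 2, f_signal + bw / 2, fs))
    e2 = envelope(bandpass(noise, f_flank - bw / 2, f_flank + bw / 2, fs))
    r = np.corrcoef(e1, e2)[0, 1]
    return r ** 2

# Comodulated noise: one slow (4 Hz) envelope imposed on a broadband carrier,
# so all frequency bands share the same amplitude fluctuations.
carrier = rng.standard_normal(t.size)
env = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)
comod = env * carrier

# Uncomodulated noise: statistically independent fluctuations in each band.
uncomod = rng.standard_normal(t.size)

ci_comod = comodulation_index(comod, 1000, 1500, 200, fs)
ci_uncomod = comodulation_index(uncomod, 1000, 1500, 200, fs)
print(ci_comod, ci_uncomod)
```

The comodulated noise yields the higher index because its flanking-band envelope carries the same 4 Hz modulation as the on-signal band, which is the cue the abstract links to comodulation masking release.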

  10. Neural Correlates of an Auditory Afterimage in Primary Auditory Cortex

    OpenAIRE

    Noreña, A. J.; Eggermont, J. J.

    2003-01-01

    The Zwicker tone (ZT) is defined as an auditory negative afterimage, perceived after the presentation of an appropriate inducer. Typically, a notched noise (NN) with a notch width of 1/2 octave induces a ZT with a pitch falling in the frequency range of the notch. The aim of the present study was to find potential neural correlates of the ZT in the primary auditory cortex of ketamine-anesthetized cats. Responses of multiunits were recorded simultaneously with two 8-electrode arrays during 1 s...

  11. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. 
They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  12. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas eGiret

    2015-10-01

    Full Text Available Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of

  13. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue;

    2006-01-01

    BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movements aids the recognition of spoken words. Conventionally it is thought that the visual input disambiguates otherwise confusing auditory input. HYPOTHESIS: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p...

  14. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    Full Text Available A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have a limited effect on this.
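
    To make the "feature-based auditory representation" concrete, here is a minimal Python/NumPy sketch of a hypothetical row-to-frequency sonification in the spirit of visual-to-auditory substitution devices: each pixel row drives a sine partial and the image is scanned column by column. The frequency range, scan rate, and log spacing are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def sonify_column(column, fs=8000, dur=0.05, f_lo=200.0, f_hi=4000.0):
    """Map one image column to a short sound: each pixel row drives a
    sine partial (top rows high-pitched, bottom rows low-pitched) whose
    amplitude is the pixel brightness."""
    n_rows = len(column)
    freqs = np.geomspace(f_hi, f_lo, n_rows)      # log-spaced, high to low
    t = np.arange(int(fs * dur)) / fs
    tones = column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
    return tones.sum(axis=0)

# A tiny 8x4 image containing a single bright horizontal line:
image = np.zeros((8, 4))
image[2, :] = 1.0
# Scan columns left to right, concatenating one snippet per column:
audio = np.concatenate([sonify_column(image[:, j]) for j in range(4)])
print(audio.shape)
```

    A listener decoding such a stream must regroup the simultaneous partials into a visual object, which is where auditory grouping principles such as consonance can interfere.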

  15. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to different degrees, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity...
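
    The processing chain described above can be caricatured in a few lines. The following Python sketch implements only a heavily reduced front end for a single channel: a bandpass "basilar-membrane" filter, half-wave rectification plus lowpass filtering as a crude hair-cell stage, and the 150-Hz lowpass modulation filter. The filter orders and cutoffs are illustrative assumptions; the adaptation stage, modulation filterbank, internal noise, and optimal detector of the actual model are omitted.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def peripheral_sketch(x, fs, cf=1000.0, bw=200.0):
    """Very reduced single-channel front end (illustrative only)."""
    # bandpass filter around the channel's characteristic frequency
    sos_bm = butter(2, [cf - bw / 2, cf + bw / 2], btype="band",
                    fs=fs, output="sos")
    y = sosfilt(sos_bm, x)
    # crude hair-cell transduction: half-wave rectification + 1-kHz lowpass
    y = np.maximum(y, 0.0)
    y = sosfilt(butter(1, 1000.0, btype="low", fs=fs, output="sos"), y)
    # 150-Hz lowpass modulation filter (envelope smoothing)
    return sosfilt(butter(1, 150.0, btype="low", fs=fs, output="sos"), y)

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# 1-kHz carrier with 10-Hz amplitude modulation
x = (1 + 0.8 * np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 1000 * t)
env = peripheral_sketch(x, fs)
print(env.shape)  # one channel of the internal representation
```

    In the full model, many such channels feed a modulation filterbank whose outputs are compared by an optimal detector to predict masking thresholds.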

  16. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of the pitch-relevant cues that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners, together with the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones...

  17. Adaptation in the auditory system: an overview

    OpenAIRE

    David ePérez-González; Malmierca, Manuel S.

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the s...

  18. Auditory-visual interaction: from fundamental research in cognitive psychology to (possible) applications

    Science.gov (United States)

    Kohlrausch, Armin; van de Par, Steven

    1999-05-01

    In our natural environment, we simultaneously receive information through various sensory modalities. The properties of these stimuli are coupled by physical laws, so that, e.g., auditory and visual stimuli caused by the same event have a fixed temporal relation when reaching the observer. In speech, for example, visible lip movements and audible utterances occur in close synchrony, which contributes to the improvement of speech intelligibility under adverse acoustic conditions. Research into multisensory perception is currently being performed in a great variety of experimental contexts. This paper attempts to give an overview of the typical research areas dealing with audio-visual interaction and integration, bridging the range from cognitive psychology to applied research for multimedia applications. Issues of interest are the sensitivity to asynchrony between audio and video signals, the interaction between audio-visual stimuli with discrepant spatial and temporal rate information, crossmodal effects in attention, audio-visual interactions in speech perception, and the combined perceived quality of audio-visual stimuli.

  19. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  20. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    Full Text Available OBJECTIVE: To analyze, by age, auditory processing test results in children who suffered from otitis media during their first five years of life. Furthermore, to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public schools without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media during their first five years of life and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment by auditory processing tests. RESULTS: The left ear showed significantly worse performance than the right ear in the dichotic digits test and pitch pattern sequence test. The students from the experimental groups showed worse performance than the control group in the dichotic digits and gaps-in-noise tests. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests compared with experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years of life and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests than students from private schools.

  1. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  2. Integration of Spatio-Temporal Analysis of Rainfall and Community Information System to Reduce Landslide Risk in Indonesia

    Directory of Open Access Journals (Sweden)

    Sudibyakto

    2013-07-01

    Full Text Available Indonesia is vulnerable to many types of disasters, including natural and anthropogenic disasters. Indonesian seasonal rainfall also shows inter-annual variation. Sediment-related disasters such as landslides are the most frequent disasters and have significantly impacted the natural, human, and social environment. Although many disaster mitigation efforts have been conducted to reduce disaster risk, there is still an urgent need to improve the early warning system by communicating the risk to the local community. Integration of spatio-temporal analysis of rainfall and a disaster management information system would be required to improve disaster management in Indonesia. An application of the Disaster Management Information System in the study area will be presented, including an evacuation map used by the local community.

  3. Contribution of Temporal Processing Skills to Reading Comprehension in 8-Year-Olds: Evidence for a Mediation Effect of Phonological Awareness

    Science.gov (United States)

    Malenfant, Nathalie; Grondin, Simon; Boivin, Michel; Forget-Dubois, Nadine; Robaey, Philippe; Dionne, Ginette

    2012-01-01

    This study tested whether the association between temporal processing (TP) and reading is mediated by phonological awareness (PA) in a normative sample of 615 eight-year-olds. TP was measured with auditory and bimodal (visual-auditory) temporal order judgment tasks and PA with a phoneme deletion task. PA partially mediated the association between…
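
    A mediation claim of this kind is commonly tested by comparing the total effect of TP on reading with the direct effect that remains after controlling for PA. The sketch below illustrates that logic on simulated data; the variable names and effect sizes are invented for illustration and are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 615
# Simulated data in which temporal processing (TP) influences reading
# only through phonological awareness (PA), i.e. full mediation.
tp = rng.normal(size=n)
pa = 0.6 * tp + rng.normal(size=n)
reading = 0.5 * pa + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x."""
    return np.polyfit(x, y, 1)[0]

def residualize(y, z):
    """Residual of y after regressing out z."""
    b, b0 = np.polyfit(z, y, 1)
    return y - (b * z + b0)

c = slope(tp, reading)   # total effect of TP on reading
a = slope(tp, pa)        # path TP -> PA
# direct effect c' of TP on reading controlling for PA
# (Frisch-Waugh: regress residualized reading on residualized TP)
c_prime = slope(residualize(tp, pa), residualize(reading, pa))
print(round(c, 2), round(a, 2), round(c_prime, 2))
```

    With full mediation, the direct effect c' shrinks toward zero once PA is controlled; a partial mediation, as reported in the study, would leave a reduced but nonzero c'.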

  4. Vocal Stereotypy in Children with Autism: Structural Characteristics, Variability, and Effects of Auditory Stimulation

    Science.gov (United States)

    Lanovaz, Marc J.; Sladeczek, Ingrid E.

    2011-01-01

    Two experiments were conducted to examine (a) the relationship between the structural characteristics (i.e., bout duration, inter-response time [IRT], pitch, and energy) and overall duration of vocal stereotypy, and (b) the effects of auditory stimulation on the duration and temporal structure of the behavior. In the first experiment, we measured…

  5. Speed on the dance floor: Auditory and visual cues for musical tempo.

    Science.gov (United States)

    London, Justin; Burger, Birgitta; Thompson, Marc; Toiviainen, Petri

    2016-02-01

    Musical tempo is most strongly associated with the rate of the beat or "tactus," which may be defined as the most prominent rhythmic periodicity present in the music, typically in a range of 1.67-2 Hz. However, other factors such as rhythmic density, mean rhythmic inter-onset interval, metrical (accentual) structure, and rhythmic complexity can affect perceived tempo (Drake, Gros, & Penel, 1999; London, 2011). Visual information can also give rise to a perceived beat/tempo (Iversen et al., 2015), and auditory and visual temporal cues can interact and mutually influence each other (Soto-Faraco & Kingstone, 2004; Spence, 2015). A five-part experiment was performed to assess the integration of auditory and visual information in judgments of musical tempo. Participants rated the speed of six classic R&B songs on a seven-point scale while observing an animated figure dancing to them. Participants were presented with original and time-stretched (±5%) versions of each song in audio-only, audio+video (A+V), and video-only conditions. In some videos the animations were of spontaneous movements to the different time-stretched versions of each song, and in other videos the animations were of "vigorous" versus "relaxed" interpretations of the same auditory stimulus. Two main results were observed. First, in all conditions with audio, even though participants were able to correctly rank the original vs. time-stretched versions of each song, a song-specific tempo-anchoring effect was observed, such that sped-up versions of slower songs were judged to be faster than slowed-down versions of faster songs, even when their objective beat rates were the same. Second, when viewing a vigorous dancing figure in the A+V condition, participants gave faster tempo ratings than from the audio alone or when viewing the same audio with a relaxed dancing figure. The implications of this illusory tempo percept for cross-modal sensory integration and

  6. Feedbacks between managed irrigation and water availability: Diagnosing temporal and spatial patterns using an integrated hydrologic model

    Science.gov (United States)

    Condon, Laura E.; Maxwell, Reed M.

    2014-03-01

    Groundwater-fed irrigation has been shown to deplete groundwater storage, decrease surface water runoff, and increase evapotranspiration. Here we simulate soil moisture-dependent groundwater-fed irrigation with an integrated hydrologic model. This allows for direct consideration of feedbacks between irrigation demand and groundwater depth. Special attention is paid to system dynamics in order to characterize spatial variability in irrigation demand and response to increased irrigation stress. A total of 80 years of simulation are completed for the Little Washita Basin in Southwestern Oklahoma, USA, spanning a range of agricultural development scenarios and management practices. Results show regionally aggregated irrigation impacts consistent with other studies. However, here a spectral analysis reveals that groundwater-fed irrigation also amplifies the annual streamflow cycle while dampening longer-term cyclical behavior with increased irrigation during climatological dry periods. Feedbacks between the managed and natural system are clearly observed with respect to both irrigation demand and utilization when water table depths are within a critical range. Although the model domain is heterogeneous with respect to both surface and subsurface parameters, relationships between irrigation demand, water table depth, and irrigation utilization are consistent across space and between scenarios. Still, significant local heterogeneities are observed both with respect to transient behavior and response to stress. Spatial analysis of transient behavior shows that farms with groundwater depths within a critical depth range are most sensitive to management changes. Differences in behavior highlight the importance of groundwater's role in system dynamics in addition to water availability.
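
    The spectral analysis mentioned above amounts to examining the amplitude spectrum of a long streamflow record to separate the annual cycle from slower, multi-year variability. A minimal illustration on synthetic monthly data (the series and periods are invented for the example, not model output):

```python
import numpy as np

# Synthetic 80-year monthly streamflow: an annual cycle plus a slower
# multi-year wet/dry oscillation.
months = np.arange(80 * 12)
flow = (10.0
        + 3.0 * np.sin(2 * np.pi * months / 12)         # annual cycle
        + 1.5 * np.sin(2 * np.pi * months / (7 * 12)))  # ~7-year cycle

# Amplitude spectrum of the demeaned series; freq is in cycles per month
spec = np.abs(np.fft.rfft(flow - flow.mean()))
freq = np.fft.rfftfreq(len(flow), d=1.0)

peak_period = 1.0 / freq[np.argmax(spec)]
print(peak_period)  # dominant period in months
```

    In the study's framing, irrigation would raise the spectral peak at the annual period while reducing power at the multi-year periods associated with climatological wet/dry cycles.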

  7. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity

    Directory of Open Access Journals (Sweden)

    Lena-Vanessa eDollezal

    2014-06-01

    Full Text Available Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude modulated (SAM) tones, providing purely temporal, spectral or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream) or an ambiguous percept by adjusting the fmod difference between A and B tones (∆fmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on ∆fmod between A and B SAM tones. The effect of ∆fmod, however, differs between auditory cortex and frontal regions suggesting differences in representation related to the degree of perceptual ambiguity of
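
    The ABA- stimulation paradigm with SAM tones can be sketched as follows: A and B tones share a carrier but differ in modulation frequency, and each triplet ends with a silent gap. The carrier frequency, modulation frequencies, and tone durations below are illustrative assumptions rather than the values used in the experiment.

```python
import numpy as np

def sam_tone(fc, fmod, dur, fs=16000, depth=1.0):
    """Sinusoidally amplitude-modulated (SAM) tone."""
    t = np.arange(int(fs * dur)) / fs
    return (1 + depth * np.sin(2 * np.pi * fmod * t)) * np.sin(2 * np.pi * fc * t)

def aba_sequence(fmod_a, fmod_b, fc=1000.0, tone_dur=0.1, fs=16000,
                 n_triplets=3):
    """ABA- triplets: A and B SAM tones differ only in modulation
    frequency; '-' is a silent gap of one tone duration."""
    a = sam_tone(fc, fmod_a, tone_dur, fs)
    b = sam_tone(fc, fmod_b, tone_dur, fs)
    gap = np.zeros(int(fs * tone_dur))
    triplet = np.concatenate([a, b, a, gap])
    return np.concatenate([triplet] * n_triplets)

# Large ∆fmod between A and B tones tends to promote a 2-stream percept
seq = aba_sequence(fmod_a=30.0, fmod_b=100.0)
print(seq.shape)
```

    Varying ∆fmod = |fmod_b − fmod_a| in such a sequence is how the study moved listeners between integrated, ambiguous, and segregated percepts.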

  8. The effectiveness of imagery and sentence strategy instructions as a function of visual and auditory processing in young school-age children.

    Science.gov (United States)

    Weed, K; Ryan, E B

    1985-12-01

    The relationship between auditory and visual processing modality and strategy instructions was examined in first- and second-grade children. A Pictograph Sentence Memory Test was used to determine dominant processing modality as well as to assess instructional effects. The pictograph task was given first followed by auditory or visual interference. Children who were disrupted more by visual interference were classed as visual processors and those more disrupted by auditory interference were classed as auditory processors. Auditory and visual processors were then assigned to one of three conditions: interactive imagery strategy, sentence strategy, or a control group. Children in the imagery and sentence strategy groups were briefly taught to integrate the pictographs in order to remember them better. The sentence strategy was found to be effective for both auditory and visual processors, whereas the interactive imagery strategy was effective only for auditory processors.

  9. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
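
    The Bayesian Causal Inference framework referred to here weighs the probability that the auditory and visual measurements arose from one common cause against the probability of two independent causes. Below is a minimal sketch following the standard formulation (Gaussian likelihoods and a zero-mean Gaussian spatial prior, after Körding et al., 2007); all parameter values are illustrative, not the paper's fitted values.

```python
import numpy as np

def posterior_common_cause(xv, xa, sigma_v, sigma_a, sigma_p, p_common):
    """Posterior probability that visual and auditory measurements
    xv, xa share a single cause (C=1) rather than two causes (C=2)."""
    # likelihood of (xv, xa) under one common source, source integrated out
    var1 = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
            + sigma_a**2 * sigma_p**2)
    l_c1 = np.exp(-0.5 * ((xv - xa)**2 * sigma_p**2 + xv**2 * sigma_a**2
                          + xa**2 * sigma_v**2) / var1) \
        / (2 * np.pi * np.sqrt(var1))
    # likelihood under two independent sources (product of marginals)
    l_c2 = (np.exp(-0.5 * xv**2 / (sigma_v**2 + sigma_p**2))
            / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_p**2))
            * np.exp(-0.5 * xa**2 / (sigma_a**2 + sigma_p**2))
            / np.sqrt(2 * np.pi * (sigma_a**2 + sigma_p**2)))
    return p_common * l_c1 / (p_common * l_c1 + (1 - p_common) * l_c2)

# nearby measurements -> common cause likely; far apart -> unlikely
p_near = posterior_common_cause(1.0, 2.0, 1.0, 2.0, 10.0, 0.5)
p_far = posterior_common_cause(1.0, 15.0, 1.0, 2.0, 10.0, 0.5)
print(p_near, p_far)
```

    When the common-cause posterior is high, the model fuses the cues (explaining the dominance of the visual bias in integrated trials); when it is low, the estimates remain unisensory.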

  10. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

    Full Text Available Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  11. Synchronization and phonological skills: precise auditory timing hypothesis (PATH)

    Directory of Open Access Journals (Sweden)

    Adam eTierney

    2014-11-01

    Full Text Available Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH), whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The precise auditory timing hypothesis predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  12. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension.

    Science.gov (United States)

    Lee, HweeLing; Noppeney, Uta

    2011-08-01

    Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals is currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and spectrotemporal structure in a dorsal fronto-temporal circuitry.

  13. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  14. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure.

    Science.gov (United States)

    Stacey, Paula C; Kitterick, Pádraig T; Morris, Saffron D; Sumner, Christian J

    2016-06-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
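The "optimal combination" account referred to above is typically formalized as maximum-likelihood cue integration, in which each cue is weighted by its reliability (inverse variance). A minimal sketch with illustrative numbers (not the study's data): when the auditory cue is degraded, its variance grows and the visual cue receives more weight, which is one way to capture the larger visual benefit for degraded speech.

```python
import numpy as np

def combine_cues(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood combination of two independent Gaussian cues:
    each cue is weighted by its reliability (inverse variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu_av = w_a * mu_a + w_v * mu_v
    var_av = (var_a * var_v) / (var_a + var_v)  # always <= min(var_a, var_v)
    return mu_av, var_av

# Degraded (e.g. vocoded) speech: noisy auditory cue, so vision dominates.
mu_av, var_av = combine_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
print(mu_av, var_av)  # mu_av ≈ 0.8, var_av = 0.8
```

The combined variance is always at most that of the better single cue, which is the signature of optimal integration tested by such models.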

  15. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    Science.gov (United States)

    Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  16. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Full Text Available Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  17. Animal models of spontaneous activity in the healthy and impaired auditory system

    Directory of Open Access Journals (Sweden)

    Jos J Eggermont

    2015-04-01

    Full Text Available Spontaneous neural activity in the auditory nerve fibers and in auditory cortex in healthy animals is discussed with respect to the question: Is spontaneous activity noise or an information carrier? The studies reviewed strongly suggest that spontaneous activity is a carrier of information. Subsequently, I review the numerous findings in the impaired auditory system, particularly with reference to noise trauma and tinnitus. Here the common assumption is that tinnitus reflects increased noise in the auditory system that, among other effects, disrupts temporal processing and interferes with the gap-startle reflex, which is frequently used as a behavioral assay for tinnitus. It is, however, more likely that the increased spontaneous activity in tinnitus, in firing rate as well as neural synchrony, carries information that shapes the activity of downstream structures, including non-auditory ones, leading to the tinnitus percept. The main drivers of that process are bursting and synchronous firing, which facilitate the transfer of activity across synapses and allow the formation of auditory objects, such as tinnitus.

  18. Variability and information content in auditory cortex spike trains during an interval-discrimination task.

    Science.gov (United States)

    Abolafia, Juan M; Martinez-Garcia, M; Deco, G; Sanchez-Vives, M V

    2013-11-01

    Processing of temporal information is key in auditory processing. In this study, we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike firing variability measured with the Fano factor was markedly reduced, not only during stimulation, but also in between stimuli in engaged trials. We next explored whether this decrease in variability was associated with increased information encoding. Our information theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task. PMID:23945780
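The Fano factor used in this study is simply the variance of trial-by-trial spike counts divided by their mean; it equals 1 for a Poisson process, and reduced variability (as reported during task engagement) pushes it below 1. A sketch with simulated spike counts (the two distributions are illustrative, not the recorded data):

```python
import numpy as np

rng = np.random.default_rng(0)

def fano_factor(spike_counts):
    """Fano factor: variance of spike counts across trials divided by their mean."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Poisson-like firing (idle state): Fano factor near 1.
idle = rng.poisson(lam=5.0, size=2000)
# More regular firing (engaged state): sub-Poisson variability (variance < mean).
engaged = rng.binomial(n=10, p=0.5, size=2000)

print(fano_factor(idle))     # close to 1
print(fano_factor(engaged))  # close to 0.5
```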

  19. Coding of auditory space

    OpenAIRE

    Konishi, Masakazu

    2003-01-01

    Behavioral, anatomical, and physiological approaches can be integrated in the study of sound localization in barn owls. Space representation in owls provides a useful example for discussion of place and ensemble coding. Selectivity for space is broad and ambiguous in low-order neurons. Parallel pathways for binaural cues and for different frequency bands converge on high-order space-specific neurons, which encode space more precisely. An ensemble of broadly tuned place-coding neurons may conv...

  20. Generation of spike latency tuning by thalamocortical circuits in auditory cortex.

    Science.gov (United States)

    Zhou, Yi; Mesik, Lukas; Sun, Yujiao J; Liang, Feixue; Xiao, Zhongju; Tao, Huizhong W; Zhang, Li I

    2012-07-18

    In many sensory systems, the latency of spike responses of individual neurons is found to be tuned for stimulus features and proposed to be used as a coding strategy. Whether the spike latency tuning is simply relayed along sensory ascending pathways or generated by local circuits remains unclear. Here, in vivo whole-cell recordings from rat auditory cortical neurons in layer 4 revealed that the onset latency of their aggregate thalamic input exhibited nearly flat tuning for sound frequency, whereas their spike latency tuning was much sharper with a broadly expanded dynamic range. This suggests that the spike latency tuning is not simply inherited from the thalamus, but can be largely reconstructed by local circuits in the cortex. Dissection of thalamocortical circuits and neural modeling further revealed that broadly tuned intracortical inhibition prolongs the integration time for spike generation preferentially at off-optimal frequencies, while sharply tuned intracortical excitation shortens it selectively at the optimal frequency. Such push-pull mechanisms, likely mediated by feedforward excitatory and inhibitory inputs respectively, greatly sharpen the spike latency tuning and expand its dynamic range. The modulation of integration time by thalamocortical-like circuits may represent an efficient strategy for converting information spatially coded in synaptic strength to a temporal representation.
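The push-pull mechanism can be caricatured with a toy integrate-to-threshold model: spike latency varies inversely with net drive (excitation minus inhibition), so broadly tuned inhibition inflates latency away from the best frequency while sharp excitation keeps it short at the best frequency, expanding the latency dynamic range relative to excitation alone. All tuning curves and constants below are hypothetical:

```python
import numpy as np

freqs = np.linspace(-1.0, 1.0, 81)            # distance from best frequency (octaves)
exc = 20.0 * np.exp(-(freqs / 0.25) ** 2)     # sharply tuned excitatory drive
inh = 6.0 * np.exp(-(freqs / 0.80) ** 2)      # broadly tuned inhibitory drive
theta = 5.0                                   # "charge" needed to reach spike threshold

def latency(drive):
    """Toy threshold-crossing latency: theta / net drive (infinite if never reached)."""
    return np.where(drive > 0, theta / np.maximum(drive, 1e-12), np.inf)

lat_full = latency(exc - inh)   # with intracortical inhibition
lat_exc = latency(exc)          # thalamic-like excitatory drive alone

i_best = np.argmin(np.abs(freqs))          # index of best frequency
i_off = np.argmin(np.abs(freqs - 0.25))    # an off-optimal frequency
# Inhibition expands the off/best latency ratio, i.e. sharpens latency tuning.
print(lat_full[i_off] / lat_full[i_best], lat_exc[i_off] / lat_exc[i_best])
```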

  1. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    Directory of Open Access Journals (Sweden)

    Ranganadh Narayanam

    2015-10-01

    Full Text Available The objective of this project is to discuss a versatile speech enhancement method based on the human auditory model. We describe a speech enhancement scheme that meets the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio, and discuss how the proposed system reduces noise with little speech degradation in diverse noise environments. To reduce residual noise and improve the intelligibility of speech, a psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method. This is a generalized time-frequency subtraction algorithm which advantageously exploits the wavelet multirate signal representation to preserve critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the method. To increase the intelligibility of speech, an unvoiced speech enhancement algorithm is also integrated into the system.
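The core operation in wavelet-domain noise reduction is shrinking (thresholding) transform coefficients that are likely noise-dominated. The sketch below uses only a one-level Haar transform with a fixed soft threshold; the method described above would instead use a perceptual wavelet packet decomposition with a masking-derived, time-frequency-adaptive threshold, so this is a simplified stand-in for the principle, not the paper's algorithm:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(approx.size * 2)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero; small (noise-dominated) ones vanish."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(1)
t_axis = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 8 * t_axis)                    # slowly varying "speech"
noisy = clean + 0.3 * rng.standard_normal(t_axis.size)    # additive noise

a, d = haar_dwt(noisy)
# Fixed threshold here; a perceptual model would set it per band and per frame.
denoised = haar_idwt(a, soft_threshold(d, t=0.3))
```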

  2. Mapping tropical forests and deciduous rubber plantations in Hainan Island, China by integrating PALSAR 25-m and multi-temporal Landsat images

    Science.gov (United States)

    Chen, Bangqian; Li, Xiangping; Xiao, Xiangming; Zhao, Bin; Dong, Jinwei; Kou, Weili; Qin, Yuanwei; Yang, Chuan; Wu, Zhixiang; Sun, Rui; Lan, Guoyu; Xie, Guishui

    2016-08-01

    Updated and accurate maps of tropical forests and industrial plantations, like rubber plantations, are essential for understanding the carbon cycle and optimal forest management practices, but existing optical-imagery-based efforts are greatly limited by frequent cloud cover. Here we explored the potential utility of integrating the 25-m cloud-free Phased Array type L-band Synthetic Aperture Radar (PALSAR) mosaic product and multi-temporal Landsat images to map forests and rubber plantations in Hainan Island, China. Based on structure information detected by PALSAR and yearly maximum Normalized Difference Vegetation Index (NDVI), we first identified and mapped forests with a producer accuracy (PA) of 96% and user accuracy (UA) of 98%. The resultant forest map showed reasonable spatial and areal agreements with the optical-based forest maps of Fine Resolution Observation and Monitoring Global Land Clover (FROM-GLC) and GlobalLand30. We then extracted rubber plantations from the forest map according to their deciduous features (using minimum Land Surface Water Index, LSWI), rapid changes in canopies during the Rubber Defoliation and Foliation (RDF) period (using standard deviation of LSWI), and dense canopy in the growing season (using maximum NDVI). The rubber plantation map yielded a high accuracy when validated by ground truth-based data (PA/UA > 86%) and evaluated with three farm-scale rubber plantation maps (PA/UA > 88%). It is estimated that in 2010, Hainan Island had 2.11 × 106 ha of forest and 5.15 × 105 ha of rubber plantations. This study has demonstrated the potential of integrating 25-m PALSAR-based structure information and Landsat-based spectral and phenology information for mapping tropical forests and rubber plantations.
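The phenology-based rules above translate directly into per-pixel index computations on a reflectance time series. A sketch with hypothetical reflectance values and thresholds (the paper's actual decision thresholds are not reproduced here):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def lswi(nir, swir):
    """Land Surface Water Index (sensitive to leaf and canopy water content)."""
    return (nir - swir) / (nir + swir)

# Hypothetical per-pixel surface reflectance time series over one year.
# A deciduous rubber canopy: dense in the growing season (high max NDVI),
# dry during defoliation (low min LSWI), with rapid LSWI change at defoliation.
nir  = np.array([0.40, 0.38, 0.20, 0.22, 0.42, 0.45, 0.46, 0.45])
red  = np.array([0.06, 0.07, 0.12, 0.11, 0.05, 0.05, 0.05, 0.05])
swir = np.array([0.18, 0.20, 0.30, 0.28, 0.16, 0.15, 0.15, 0.16])

is_rubber = (
    ndvi(nir, red).max() > 0.7        # dense canopy in growing season
    and lswi(nir, swir).min() < 0.1   # deciduous (dry) phase
    and lswi(nir, swir).std() > 0.1   # rapid defoliation/foliation change
)
print(is_rubber)
```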

  3. Impact of language on functional connectivity for audiovisual speech integration

    Science.gov (United States)

    Shinozaki, Jun; Hiroe, Nobuo; Sato, Masa-aki; Nagamine, Takashi; Sekiyama, Kaoru

    2016-01-01

    Visual information about lip and facial movements plays a role in audiovisual (AV) speech perception. Although this has been widely confirmed, previous behavioural studies have shown interlanguage differences, that is, native Japanese speakers do not integrate auditory and visual speech as closely as native English speakers. To elucidate the neural basis of such interlanguage differences, 22 native English speakers and 24 native Japanese speakers were examined in behavioural or functional Magnetic Resonance Imaging (fMRI) experiments while mono-syllabic speech was presented under AV, auditory-only, or visual-only conditions for speech identification. Behavioural results indicated that the English speakers identified visual speech more quickly than the Japanese speakers, and that the temporal facilitation effect of congruent visual speech was significant in the English speakers but not in the Japanese speakers. Using fMRI data, we examined the functional connectivity among brain regions important for auditory-visual interplay. The results indicated that the English speakers had significantly stronger connectivity between the visual motion area MT and Heschl's gyrus compared with the Japanese speakers, which may subserve lower-level visual influences on speech perception in English speakers in a multisensory environment. These results suggested that linguistic experience strongly affects neural connectivity involved in AV speech integration. PMID:27510407

  4. Musical expertise induces audiovisual integration of abstract congruency rules.

    Science.gov (United States)

    Paraskevopoulos, Evangelos; Kuchenbuch, Anja; Herholz, Sibylle C; Pantev, Christo

    2012-12-12

    Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is not generated due to incongruency of the unisensory physical characteristics of the stimulation but from the violation of an abstract congruency rule. The chosen rule ("the higher the pitch of the tone, the higher the position of the circle") was comparable to musical reading. In parallel, plasticity effects due to long-term musical training on this response were investigated by comparing musicians to non-musicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses. PMID:23238733

  5. Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Jill B Firszt

    2013-12-01

    Full Text Available Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g. cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately-severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and three and nine months after, surgery. We measured sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research of unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, degree and type

  6. A theory of three-dimensional auditory perception

    Science.gov (United States)

    Saifuddin, Kazi

    2001-05-01

    A theory of auditory dimensions regarding temporal perception is proposed on the basis of results from a series of experiments. In all experiments, relationships were investigated between subjective judgments and factors extracted from the autocorrelation function (ACF) of the auditory stimuli. The factors were varied by using different properties of pure-tone, complex-tone, white-noise and bandpass-noise stimuli. Paired-comparison experiments were conducted in a soundproof chamber, except for one in a concert hall. Human subjects were asked to compare the durations of the two successive stimuli in each pair, and subjective durations were obtained from the psychometric function. Auditory stimuli were selected on the basis of the measured ACF factors as parameters. The results showed significant correlations between the ACF factors and the subjective durations, described well by the theory. The theory identifies loudness and pitch as two fundamental dimensions and addresses whether stimulus duration is the third.
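The ACF factors referred to above start from the normalized autocorrelation of the stimulus waveform; for a periodic sound, the lag of the first major peak beyond zero corresponds to the pitch period. A minimal sketch (a 200 Hz pure tone; the specific ACF factors used in the experiments are not reproduced here):

```python
import numpy as np

def normalized_acf(x, max_lag):
    """Normalized autocorrelation phi(tau) of a signal: phi(0) = 1, and peaks
    at nonzero lags indicate periodicity (pitch)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    full = np.correlate(x, x, mode="full")          # lags -(N-1)..(N-1)
    acf = full[x.size - 1 : x.size - 1 + max_lag + 1]  # keep lags 0..max_lag
    return acf / acf[0]

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
tone = np.sin(2 * np.pi * 200 * t)          # 200 Hz pure tone
phi = normalized_acf(tone, max_lag=100)

# Skip very short lags, where phi is still near its zero-lag peak.
tau_1 = 20 + np.argmax(phi[20:])
print(tau_1 / fs)                           # ≈ 0.005 s = one period of 200 Hz
```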

  7. Integrated remote sensing for multi-temporal analysis of anthropic activities in the south-east of Mt. Vesuvius National Park

    Science.gov (United States)

    Manzo, C.; Mei, A.; Fontinovo, G.; Allegrini, A.; Bassani, C.

    2016-10-01

    This work shows a downscaling approach for studying environmental changes using multi- and hyper-spectral remote sensing data. The study area, located in the south-east of Mt. Vesuvius National Park, has been affected by two main activities during the last decades: mining and subsequent municipal solid waste dumping. These activities had an environmental impact on the neighbouring areas, releasing dust and gaseous pollutants into the atmosphere and leachate into the ground. The approach integrated remote sensing data at different spectral and spatial resolutions. Landsat TM images were used to study the changes that occurred in the area using environmental indices at a wider temporal scale. To identify these indices in the study area, two high spatial and spectral resolution MIVIS aerial images were used. The first image, acquired in July 2004, describes the environmental situation after the anthropic activities of extraction and dumping in some sites, while the second image, acquired in 2010, reflects the situation after the construction of a new landfill in an old quarry. The spectral response of soil and vegetation was used to interpret stress conditions and other environmental anomalies in the study areas. Warning Zones were defined by the "core" and "neighbouring" parts of the anthropic area. Different classification methods were adopted to characterize the study area: Spectral Angle Mapper (SAM) classification provided local covers, while Linear Spectral Mixture Analysis (LSMA) identified the main fractional changes of vegetation, substrate and dark surfaces. The change detection of spectral indices, supported by thermal anomalies, highlighted potentially stressed areas.
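Spectral Angle Mapper classifies each pixel by the angle between its spectrum and a set of reference spectra, which makes it largely insensitive to overall brightness differences such as illumination or shadow. A sketch with hypothetical 4-band reference spectra (the MIVIS endmembers themselves are not reproduced here):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum and a
    reference spectrum; a small angle means a similar material, regardless of
    overall brightness."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 4-band reference spectra (e.g. green, red, NIR, SWIR).
refs = {
    "vegetation":   np.array([0.05, 0.04, 0.45, 0.20]),
    "bare_soil":    np.array([0.12, 0.16, 0.22, 0.28]),
    "dark_surface": np.array([0.03, 0.03, 0.04, 0.04]),
}

# Shadowed vegetation: same spectral shape as vegetation, but dimmer overall.
pixel = np.array([0.06, 0.05, 0.40, 0.19])
label = min(refs, key=lambda k: spectral_angle(pixel, refs[k]))
print(label)  # vegetation
```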

  8. Audiovocal Integration in Adults Who Stutter

    Science.gov (United States)

    Loucks, Torrey; Chon, HeeCheong; Han, Woojae

    2012-01-01

    Background: Altered auditory feedback can facilitate speech fluency in adults who stutter. However, other findings suggest that adults who stutter show anomalies in "audiovocal integration", such as longer phonation reaction times to auditory stimuli and less effective pitch tracking. Aims: To study audiovocal integration in adults who stutter…

  9. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    Science.gov (United States)

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, with early onset of deafness) exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may

  10. [External auditory canal osteoma resulting in cholesteatoma which is complicated with meningitis].

    Science.gov (United States)

    Yorgancılar, Ediz; Kınış, Vefa; Gün, Ramazan; Bakır, Salih; Ozbay, Musa; Topçu, Ismail

    2013-01-01

    Osteoma of the external auditory canal is a unilateral benign tumor which is usually asymptomatic, causing symptoms only when cerumen collects or conductive hearing loss occurs. Osteomas are the most common osseous lesions of the temporal bone, and they very rarely present with cholesteatoma. To date, no case of osteoma concomitant with cholesteatoma and meningitis has been reported. In this article, we report an interesting case presenting concomitantly with external auditory canal osteoma, cholesteatoma, and meningitis, which was treated successfully using the canal wall-down mastoidectomy technique.

  11. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    Science.gov (United States)

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys. PMID:17321703

  12. Psychophysiological responses to auditory change.

    Science.gov (United States)

    Chuen, Lorraine; Sears, David; McAdams, Stephen

    2016-06-01

    A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions. PMID:26927928

  13. Auditory distraction and serial memory

    OpenAIRE

    Jones, D M; Hughes, Rob; Macken, W.J.

    2010-01-01

    One mental activity that is very vulnerable to auditory distraction is serial recall. This review of the contemporary findings relating to serial recall charts the key determinants of distraction. It is evident that there is one form of distraction that is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity-of-process, specifically, the seria...

  14. Reality of auditory verbal hallucinations

    OpenAIRE

    Raij TT; Valkonen-Korhonen M; Holi M; Therman S; Lehtonen J; Hari R

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation st...

  15. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different conceptions of integration in Denmark, and what can be understood by successful integration.

  16. Auditory sequence analysis and phonological skill.

    Science.gov (United States)

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  17. Representations of temporal information in short-term memory: Are they modality-specific?

    Science.gov (United States)

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

    Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding.

  18. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Full Text Available Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflected inhibition via interneurons using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak (5 dB increase for 1 ms) prepulse. The time course of the inhibition evaluated by prepulses presented at 10-800 ms before the test stimulus showed at least two temporally distinct inhibitions peaking at approximately 20-60 and 600 ms that presumably reflected IPSPs by fast spiking, parvalbumin-positive cells and somatostatin-positive, Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflected activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  19. Early influence of auditory stimuli on upper-limb movements in young human infants: an overview

    Directory of Open Access Journals (Sweden)

    Priscilla Augusta Monteiro Ferronato

    2014-09-01

    Full Text Available Given that the auditory system is rather well developed at the end of the third trimester of pregnancy, it is likely that couplings between acoustics and motor activity can be integrated as early as at the beginning of postnatal life. The aim of the present mini-review was to summarize and discuss studies on early auditory-motor integration, focusing particularly on upper-limb movements (one of the most crucial means to interact with the environment) in association with auditory stimuli, to develop further understanding of their significance with regard to early infant development. Many studies have investigated the relationship between various infant behaviors (e.g., sucking, visual fixation, head turning) and auditory stimuli, and established that human infants can be observed displaying couplings between action and environmental sensory stimulation already from just after birth, clearly indicating a propensity for intentional behavior. Surprisingly few studies, however, have investigated the associations between upper-limb movements and different auditory stimuli in newborns and young infants, infants born at risk for developmental disorders/delays in particular. Findings from studies of early auditory-motor interaction support that the developing integration of sensory and motor systems is a fundamental part of the process guiding the development of goal-directed action in infancy, of great importance for continued motor, perceptual and cognitive development. At-risk infants (e.g., those born preterm) may display increasing central auditory processing disorders, negatively affecting early sensory-motor integration, and resulting in long-term consequences on gesturing, language development and social communication. Consequently, there is a need for more studies on such implications.

  20. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD), based on auditory properties of human hearing, is presented to measure speech distortion. The PSD measure calculates the speech distortion distance by simulating the auditory properties of human hearing and converting the short-time speech power spectrum to an auditory perceptual spectrum. Preliminary simulation experiments in comparison with the Itakura measure have been done. The results show that the PSD measure is a preferable speech distortion measure and more consistent with subjective assessment of speech quality.

  1. Auditory stimulation and cardiac autonomic regulation

    OpenAIRE

    Vitor E Valenti; Guida, Heraldo L.; Frizzo, Ana C F; Cardoso, Ana C. V.; Vanderlei, Luiz Carlos M; Luiz Carlos de Abreu

    2012-01-01

    Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation bet...

  2. Mechanisms of Auditory Verbal Hallucination in Schizophrenia

    OpenAIRE

    Raymond eCho; Wayne eWu

    2013-01-01

    Recent work on the mechanisms underlying auditory verbal hallucination (AVH) has been heavily informed by self-monitoring accounts that postulate defects in an internal monitoring mechanism as the basis of AVH. A more neglected alternative is an account focusing on defects in auditory processing, namely a spontaneous activation account of auditory activity underlying AVH. Science is often aided by putting theories in competition. Accordingly, a discussion that systematically contrasts the two...

  3. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test," "The Clymer-Barrett Prereading Battery," and the…

  4. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  5. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  6. Improving Hearing Performance Using Natural Auditory Coding Strategies

    Science.gov (United States)

    Rattay, Frank

    Sound transfer from the human ear to the brain is based on three quite different neural coding principles when the continuous temporal auditory source signal is sent as binary code in excellent quality via 30,000 nerve fibers per ear. Cochlear implants are well-accepted neural prostheses for people with sensory hearing loss, but currently the devices are inspired only by the tonotopic principle. According to this principle, every sound frequency is mapped to a specific place along the cochlea. By electrical stimulation, the frequency content of the acoustic signal is distributed via few contacts of the prosthesis to corresponding places and generates spikes there. In contrast to the natural situation, the artificially evoked information content in the auditory nerve is quite poor, especially because the richness of the temporal fine structure of the neural pattern is replaced by a firing pattern that is strongly synchronized with an artificial cycle duration. Improvement in hearing performance is expected by involving more of the ingenious strategies developed during evolution.

  7. Quantitative cerebral perfusion assessment using microscope-integrated analysis of intraoperative indocyanine green fluorescence angiography versus positron emission tomography in superficial temporal artery to middle cerebral artery anastomosis

    Directory of Open Access Journals (Sweden)

    Shinya Kobayashi

    2014-01-01

    Full Text Available Background: Intraoperative qualitative indocyanine green (ICG) angiography has been used in cerebrovascular surgery. Hyperperfusion may lead to neurological complications after superficial temporal artery to middle cerebral artery (STA-MCA) anastomosis. The purpose of this study is to quantitatively evaluate intraoperative cerebral perfusion using microscope-integrated dynamic ICG fluorescence analysis, and to assess whether this value predicts hyperperfusion syndrome (HPS) after STA-MCA anastomosis. Methods: Ten patients undergoing STA-MCA anastomosis due to unilateral major cerebral artery occlusive disease were included. Ten patients with normal cerebral perfusion served as controls. The ICG transit curve from six regions of interest (ROIs) on the cortex, corresponding to ROIs on the positron emission tomography (PET) study, was recorded. Maximum intensity (IMAX), cerebral blood flow index (CBFi), rise time (RT), and time to peak (TTP) were evaluated. Results: RT/TTP, but not IMAX or CBFi, could differentiate between control and study subjects. RT/TTP correlated (|r| = 0.534-0.807; P < 0.01) with the mean transit time (MTT) ratio in the ipsilateral to contralateral hemisphere by PET study. Bland-Altman analysis showed a wide limit of agreement between RT and MTT and between TTP and MTT. The ratio of RT before and after bypass procedures was significantly lower in patients with postoperative HPS than in patients without postoperative HPS (0.60 ± 0.032 and 0.80 ± 0.056, respectively; P = 0.017). The ratio of TTP was also significantly lower in patients with postoperative HPS than in patients without postoperative HPS (0.64 ± 0.081 and 0.85 ± 0.095, respectively; P = 0.017). Conclusions: Time-dependent intraoperative parameters from the ICG transit curve provide quantitative information regarding cerebral circulation time with quality and utility comparable to information obtained by PET. These parameters may help predict the occurrence of postoperative
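The time-dependent parameters this record describes can be extracted from a sampled transit curve; the sketch below uses a synthetic curve and an illustrative definition of rise time (the 10-90% rise interval), since the paper's exact definitions are not reproduced here.

```python
import numpy as np

def transit_curve_params(t, intensity, lo=0.1, hi=0.9):
    """Estimate maximum intensity (IMAX), time to peak (TTP) and rise
    time (RT) from a sampled fluorescence transit curve. RT is taken
    here as the 10-90% rise interval -- an illustrative assumption."""
    i_max = float(intensity.max())
    peak = int(np.argmax(intensity))
    ttp = t[peak] - t[0]
    # the rising limb is monotonic for this curve, so np.interp can
    # invert intensity -> time on it
    t_lo = np.interp(lo * i_max, intensity[: peak + 1], t[: peak + 1])
    t_hi = np.interp(hi * i_max, intensity[: peak + 1], t[: peak + 1])
    return i_max, ttp, t_hi - t_lo

# synthetic gamma-variate-like transit curve peaking at t = 6 s
t = np.linspace(0.0, 30.0, 301)
curve = t ** 2 * np.exp(-t / 3.0)
i_max, ttp, rt = transit_curve_params(t, curve)
```

Ratios of such parameters before and after bypass (as in the study) then reduce to dividing two of these estimates.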

  8. Integration of spatial and temporal data for the definition of different landslide hazard scenarios in the area north of Lisbon (Portugal

    Directory of Open Access Journals (Sweden)

    J. L. Zêzere

    2004-01-01

    Full Text Available A general methodology for the probabilistic evaluation of landslide hazard is applied, taking into account both the landslide susceptibility and the instability triggering factors, mainly rainfall. The method is applied in the Fanhões-Trancão test site (north of Lisbon, Portugal) where 100 shallow translational slides were mapped and integrated into a GIS database. For the landslide susceptibility assessment it is assumed that future landslides can be predicted by statistical relationships between past landslides and the spatial data set of the predisposing factors (slope angle, slope aspect, transversal slope profile, lithology, superficial deposits, geomorphology, and land use). Susceptibility is evaluated using algorithms based on statistical/probabilistic analysis (Bayesian model) over unique-condition terrain units in a raster basis. The landslide susceptibility map is prepared by sorting all pixels according to the pixel susceptibility value in descending order. In order to validate the results of the susceptibility analysis, the landslide data set is divided in two parts, using a temporal criterion. The first subset is used for obtaining a prediction image and the second subset is compared with the prediction results for validation. The obtained prediction-rate curve is used for the quantitative interpretation of the initial susceptibility map. Landslides in the study area are triggered by rainfall. The integration of triggering information in hazard assessment includes (i) the definition of thresholds of rainfall (quantity-duration) responsible for past landslide events; (ii) the calculation of the relevant return periods; (iii) the assumption that the same rainfall patterns (quantity/duration) which produced slope instability in the past will produce the same effects in the future (i.e. same types of landslides and same total affected area). The landslide hazard is presented as the probability of each pixel being affected by a slope movement.
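For a single predisposing factor, the record's Bayesian susceptibility step reduces to the posterior probability of sliding in each factor class; the toy raster below is hypothetical, and the full method combines several factors over unique-condition terrain units.

```python
import numpy as np

def bayes_susceptibility(factor, slides):
    """Per-class posterior probability of sliding:
    P(slide | class) = P(class | slide) * P(slide) / P(class).
    `factor` holds a class label per pixel; `slides` flags slide pixels."""
    p_slide = slides.mean()
    post = np.zeros(factor.shape, dtype=float)
    for c in np.unique(factor):
        in_c = factor == c
        p_c = in_c.mean()
        p_c_given_s = (in_c & slides).sum() / max(slides.sum(), 1)
        post[in_c] = p_c_given_s * p_slide / p_c
    return post

# toy 10-pixel raster: one factor with 3 classes, known past slides
factor = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
slides = np.array([1, 0, 1, 1, 0, 0, 0, 0, 0, 1], dtype=bool)
susc = bayes_susceptibility(factor, slides)
```

Sorting pixels by `susc` in descending order gives the ranking from which the record's susceptibility map and prediction-rate curve are built.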

  9. Auditory Streaming as an Online Classification Process with Evidence Accumulation.

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
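A toy version of the evidence-accumulation account (parameters and functional forms are my own simplification, not the authors' model) reproduces the key qualitative prediction: a positive correlation between successive phase durations.

```python
import random

def simulate_phases(n_phases=2000, gain=0.5, noise=0.3, seed=1):
    """Toy evidence-accumulation model of streaming bistability:
    evidence for the opposite percept builds up during each phase,
    so a long phase seeds a long successor phase."""
    random.seed(seed)
    durations = []
    evidence = 1.0
    for _ in range(n_phases):
        # phase duration grows with the evidence carried over, plus noise
        d = evidence * (1.0 + noise * random.random())
        durations.append(d)
        # part of this phase's length becomes evidence for the next percept
        evidence = gain * d + (1.0 - gain)
    return durations

def serial_corr(xs):
    """Pearson correlation between successive phase durations."""
    x, y = xs[:-1], xs[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = serial_corr(simulate_phases())  # positive, as the study reports
```

Models assuming statistically independent phase durations would instead predict `r` near zero.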

  11. Loss of auditory sensitivity from inner hair cell synaptopathy can be centrally compensated in the young but not old brain.

    Science.gov (United States)

    Möhrle, Dorit; Ni, Kun; Varakina, Ksenya; Bing, Dan; Lee, Sze Chim; Zimmermann, Ulrike; Knipper, Marlies; Rüttiger, Lukas

    2016-08-01

    A dramatic shift in societal demographics will lead to rapid growth in the number of older people with hearing deficits. Poorer performance in suprathreshold speech understanding and temporal processing with age has been previously linked with progressing inner hair cell (IHC) synaptopathy that precedes age-dependent elevation of auditory thresholds. We compared central sound responsiveness after acoustic trauma in young, middle-aged, and older rats. We demonstrate that IHC synaptopathy progresses from middle age onward and hearing threshold becomes elevated from old age onward. Interestingly, middle-aged animals could centrally compensate for the loss of auditory fiber activity through an increase in late auditory brainstem responses (late auditory brainstem response wave) linked to shortening of central response latencies. In contrast, old animals failed to restore central responsiveness, which correlated with reduced temporal resolution in responding to amplitude changes. These findings may suggest that cochlear IHC synaptopathy with age does not necessarily induce temporal auditory coding deficits, as long as the capacity to generate neuronal gain maintains normal sound-induced central amplitudes. PMID:27318145

  12. Auditory hallucinations suppressed by etizolam in a patient with schizophrenia.

    Science.gov (United States)

    Benazzi, F; Mazzoli, M; Rossi, E

    1993-10-01

    A patient presented with a 15 year history of schizophrenia with auditory hallucinations. Though unresponsive to prolonged trials of neuroleptics, the auditory hallucinations disappeared with etizolam. PMID:7902201

  13. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling.

  14. Project Temporalities

    DEFF Research Database (Denmark)

    Tryggestad, Kjell; Justesen, Lise; Mouritsen, Jan

    2013-01-01

    Purpose – The purpose of this paper is to explore how animals can become stakeholders in interaction with project management technologies and what happens with project temporalities when new and surprising stakeholders become part of a project and a recognized matter of concern to be taken into account. Design/methodology/approach – The paper is based on a qualitative case study of a project in the building industry. The authors use actor-network theory (ANT) to analyze the emergence of animal stakeholders, stakes and temporalities. Findings – The study shows how project temporalities can… This may require investments in new project management technologies. Originality/value – This paper adds to the literatures on project temporalities and stakeholder theory by connecting them to the question of non-human stakeholders and to project management technologies.

  15. Spatio-temporal Data Model Based on Relational Database System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this paper, an entity-relation data model for integrating spatio-temporal data is designed. With this design, spatio-temporal data can be effectively stored and spatio-temporal analysis can be easily realized.
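The record's idea, keeping spatio-temporal states in ordinary relational tables, can be sketched as an entity table plus a time-stamped state table; table and column names below are illustrative, not taken from the paper.

```python
import sqlite3

# Entity table plus a state table keyed by (entity, valid_from): each
# geometry/attribute version carries its own validity interval, so a
# plain SQL predicate answers "state as of time T".
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE entity_state (
    entity_id  INTEGER REFERENCES entity(id),
    valid_from TEXT NOT NULL,   -- ISO-8601 timestamp
    valid_to   TEXT,            -- NULL = current state
    x REAL, y REAL,             -- point geometry for simplicity
    PRIMARY KEY (entity_id, valid_from)
);
""")
conn.execute("INSERT INTO entity VALUES (1, 'parcel A')")
conn.execute("INSERT INTO entity_state VALUES (1, '2001-01-01', '2002-01-01', 0.0, 0.0)")
conn.execute("INSERT INTO entity_state VALUES (1, '2002-01-01', NULL, 1.0, 2.0)")

# temporal query: position of entity 1 as of 2002-06-01
row = conn.execute("""
    SELECT x, y FROM entity_state
    WHERE entity_id = 1 AND valid_from <= '2002-06-01'
      AND (valid_to IS NULL OR valid_to > '2002-06-01')
""").fetchone()
```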

  16. Rapid context-based identification of target sounds in an auditory scene

    Science.gov (United States)

    Gamble, Marissa L.; Woldorff, Marty G.

    2015-01-01

    To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. While it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff (2014) recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented, auditory scene. They reported an early, differential, bilateral activation (beginning ~60 ms) between feature-deviating Target stimuli and physically equivalent feature-deviating Nontargets, reflecting a rapid Target-detection process. This was followed shortly later (~130 ms) by the lateralized N2ac ERP activation, reflecting the focusing of auditory spatial attention toward the Target sound and paralleling attentional-shifting processes widely studied in vision. Here we directly examined the early, bilateral, Target-selective effect to better understand its nature and functional role. Participants listened to midline-presented sounds that included Target and Nontarget stimuli that were randomly either embedded in a brief rapid stream or presented alone. The results indicate that this early bilateral effect results from a template for the Target that utilizes its feature deviancy within a stream to enable rapid identification. Moreover, individual-differences analysis showed that the size of this effect was larger for subjects with faster response times. The findings support the hypothesis that our auditory attentional systems can implement and utilize a context-based relational template for a Target sound, making use of additional auditory information in the environment when needing to rapidly detect a relevant sound. PMID:25848684

  17. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  18. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  19. Neuronal connectivity and interactions between the auditory and limbic systems. Effects of noise and tinnitus.

    Science.gov (United States)

    Kraus, Kari Suzanne; Canlon, Barbara

    2012-06-01

    Acoustic experience such as sound, noise, or absence of sound induces structural or functional changes in the central auditory system but can also affect limbic regions such as the amygdala and hippocampus. The amygdala is particularly sensitive to sound with valence or meaning, such as vocalizations, crying or music. The amygdala plays a central role in auditory fear conditioning, regulation of the acoustic startle response and can modulate auditory cortex plasticity. A stressful acoustic stimulus, such as noise, causes amygdala-mediated release of stress hormones via the HPA-axis, which may have negative effects on health, as well as on the central nervous system. On the contrary, short-term exposure to stress hormones elicits positive effects such as hearing protection. The hippocampus can affect auditory processing by adding a temporal dimension, as well as being able to mediate novelty detection via theta wave phase-locking. Noise exposure affects hippocampal neurogenesis and LTP in a manner that affects structural plasticity, learning and memory. Tinnitus, typically induced by hearing malfunctions, is associated with emotional stress, depression and anatomical changes of the hippocampus. In turn, the limbic system may play a role in the generation as well as the suppression of tinnitus indicating that the limbic system may be essential for tinnitus treatment. A further understanding of auditory-limbic interactions will contribute to future treatment strategies of tinnitus and noise trauma. PMID:22440225

  20. Dichotic auditory-verbal memory in adults with cerebro-vascular accident

    Directory of Open Access Journals (Sweden)

    Samaneh Yekta

    2014-01-01

    Full Text Available Background and Aim: Cerebrovascular accident is a neurological disorder that involves the central nervous system. Studies have shown that it affects the outputs of behavioral auditory tests such as the dichotic auditory verbal memory test. The purpose of this study was to compare this memory test's results between patients with cerebrovascular accident and normal subjects. Methods: This cross-sectional study was conducted on 20 patients with cerebrovascular accident aged 50-70 years and 20 controls matched for age and gender in Emam Khomeini Hospital, Tehran, Iran. The dichotic auditory verbal memory test was performed on each subject. Results: The mean score in the two groups was significantly different (p<0.0001). The results indicated that the right-ear score was significantly greater than the left-ear score in normal subjects (p<0.0001) and in patients with right hemisphere lesion (p<0.0001). The right-ear and left-ear scores were not significantly different in patients with left hemisphere lesion (p=0.0860). Conclusion: Among other methods, the dichotic auditory verbal memory test is a beneficial test for assessing the central auditory nervous system of patients with cerebrovascular accident. It seems that it is sensitive to the damage that occurs following temporal lobe strokes.

  1. Visual-auditory differences in duration discrimination of intervals in the subsecond and second range

    Directory of Open Access Journals (Sweden)

    Thomas eRammsayer

    2015-10-01

    Full Text Available A common finding in time psychophysics is that temporal acuity is much better for auditory than for visual stimuli. The present study aimed to examine modality-specific differences in duration discrimination within the conceptual framework of the Distinct Timing Hypothesis. This theoretical account proposes that durations in the lower millisecond range are processed automatically, while longer durations are processed by a cognitive mechanism. A sample of 46 participants performed auditory and visual duration discrimination tasks with extremely brief (50-ms standard duration) and longer (1000-ms standard duration) intervals. Better discrimination performance for auditory compared to visual intervals was established for both extremely brief and longer intervals. However, when performance on duration discrimination of longer intervals in the one-second range was controlled for modality-specific input from the sensory-automatic timing mechanism, the visual-auditory difference disappeared completely, as indicated by virtually identical Weber fractions for both sensory modalities. These findings support the idea of a sensory-automatic mechanism underlying the observed visual-auditory differences in duration discrimination of extremely brief intervals in the millisecond range and longer intervals in the one-second range. Our data are consistent with the notion of a gradual transition from a purely modality-specific, sensory-automatic to a more cognitive, amodal timing mechanism. Within this transition zone, both mechanisms appear to operate simultaneously, but the influence of the sensory-automatic timing mechanism is expected to decrease continuously with increasing interval duration.
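
The Weber fractions invoked in this abstract are simply the ratio of the duration difference threshold (the just-noticeable difference, JND) to the standard duration. A minimal sketch of the computation, using hypothetical threshold values rather than the study's data:

```python
def weber_fraction(jnd_ms: float, standard_ms: float) -> float:
    """Weber fraction = difference threshold (JND) / standard duration."""
    return jnd_ms / standard_ms

# Hypothetical difference thresholds, for illustration only:
wf_brief = weber_fraction(jnd_ms=15.0, standard_ms=50.0)    # 50-ms standard
wf_long = weber_fraction(jnd_ms=100.0, standard_ms=1000.0)  # 1000-ms standard

# Virtually identical Weber fractions across modalities at the longer
# standard would point to the amodal, cognitive timing mechanism.
print(wf_brief)  # 0.3
print(wf_long)   # 0.1
```

Comparing this ratio, rather than the raw JND, is what allows discrimination performance to be compared across very different standard durations.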

  2. Poor supplementary motor area activation differentiates auditory verbal hallucination from imagining the hallucination.

    Science.gov (United States)

    Raij, Tuukka T; Riekki, Tapani J J

    2012-01-01

    Neuronal underpinnings of auditory verbal hallucination remain poorly understood. One suggested mechanism is brain activation that is similar to verbal imagery but occurs without the proper activation of the neuronal systems that are required to tag the origins of verbal imagery in one's mind. Such neuronal systems involve the supplementary motor area. The supplementary motor area has been associated with awareness of the intention to make a hand movement, but whether this region is related to the sense of ownership of one's verbal thought remains poorly known. We hypothesized that the supplementary motor area is related to the distinction between one's own mental processing (auditory verbal imagery) and similar processing that is attributed to a non-self author (auditory verbal hallucination). To test this hypothesis, we asked patients to signal the onset and offset of their auditory verbal hallucinations during functional magnetic resonance imaging. During non-hallucination periods, we asked the same patients to imagine the hallucination they had previously experienced. In addition, healthy control subjects signaled the onset and offset of self-paced imagery of similar voices. Both hallucinations and the imagery of hallucinations were associated with similar activation strengths of the fronto-temporal language-related circuitries, but the supplementary motor area was activated more strongly during the imagery than during hallucination. These findings suggest that auditory verbal hallucination resembles verbal imagery in language processing, but without the involvement of the supplementary motor area, which may subserve the sense of ownership of one's own verbal imagery. PMID:24179739

  4. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (mean age = 53.19 years)…

  5. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  6. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  7. The peripheral auditory characteristics of noctuid moths: responses to the search-phase echolocation calls of bats

    Science.gov (United States)

    Waters; Jones

    1996-01-01

    The noctuid moths Agrotis segetum and Noctua pronuba show peak auditory sensitivity between 15 and 25 kHz, and a maximum sensitivity of 35 dB SPL. A. segetum shows a temporal integration time of 69 ms. It is predicted that bats using high-frequency and short-duration calls will be acoustically less apparent to these moths. Short-duration frequency-modulated (FM) calls of Plecotus auritus are not significantly less acoustically apparent than those of other FM bats with slightly longer call durations, based on their combined frequency and temporal structure alone. Long-duration, high-frequency, constant-frequency (CF) calls of Rhinolophus hipposideros at 113 kHz are significantly less apparent than those of the FM bats tested. The predicted low call apparency of the 83 kHz CF calls of R. ferrumequinum appears to be counteracted by their long duration. It is proposed that two separate mechanisms are exploited by bats to reduce their call apparency, low intensity in FM bats and high frequency in CF bats. Within the FM bats tested, shorter-duration calls do not significantly reduce the apparency of the call at the peripheral level, though they may limit the amount of information available to the central nervous system. PMID:9318627
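
The 69-ms integration time reported for A. segetum can be illustrated with a classical exponential (leaky) energy-integration model of detection, in which the threshold for a tone burst rises as its duration shrinks below the integration time constant. This is a generic textbook-style sketch under that assumption, not the specific analysis used by Waters & Jones:

```python
import math

def threshold_elevation_db(duration_ms: float, tau_ms: float = 69.0) -> float:
    """Extra level (dB) needed to detect a brief tone burst relative to a
    long-duration tone, assuming exponential temporal integration:
    I_th(t) = I_inf / (1 - exp(-t / tau))."""
    return -10.0 * math.log10(1.0 - math.exp(-duration_ms / tau_ms))

# Short echolocation calls require more intensity to reach the moth's
# detection threshold, which is one reason short-duration calls are
# predicted to be less acoustically apparent:
for d_ms in (2.0, 10.0, 69.0, 500.0):
    print(f"{d_ms:6.1f} ms burst: +{threshold_elevation_db(d_ms):.1f} dB")
```

With these parameters a 2-ms burst needs roughly 15 dB more level than a long tone, while bursts much longer than tau gain essentially nothing from further lengthening.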

  8. Material differences of auditory source retrieval: Evidence from event-related potential studies

    Institute of Scientific and Technical Information of China (English)

    NIE AiQing; GUO ChunYan; SHEN MoWei

    2008-01-01

    Two event-related potential experiments were conducted to investigate the temporal and spatial distributions of the old/new effects for an item recognition task and an auditory source retrieval task, using pictures and Chinese characters as stimuli, respectively. Stimuli were presented at the center of the screen with their names read out simultaneously by either a female or a male voice during the study phase, and two tests were then performed separately. One test task was to differentiate the old items from the new ones; the other was to judge items that had been read out by a particular voice during the study phase as targets and all other items as non-targets. The results showed that the old/new effect of the auditory source retrieval task was more sustained over time than that of the item recognition task in both experiments, and the spatial distribution of the former effect was wider than that of the latter. Both experiments recorded a reliable old/new effect over the prefrontal cortex during the source retrieval task. However, there were some differences in the old/new effect for the auditory source retrieval task between pictures and Chinese characters, and LORETA source analysis indicated that these differences might be rooted in the temporal lobe. These findings demonstrate that the relationship between the old/new effects of the item recognition task and the auditory source retrieval task supports the dual-process model; the spatial and temporal distributions of the old/new effect elicited by the auditory source retrieval task are regulated by both the features of the experimental material and the perceptual attributes of the voice.

  9. Temporal Characterization of Hydrates System Dynamics beneath Seafloor Mounds. Integrating Time-Lapse Electrical Resistivity Methods and In Situ Observations of Multiple Oceanographic Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Lutken, Carol [Univ. of Mississippi, Oxford, MS (United States); Macelloni, Leonardo [Univ. of Mississippi, Oxford, MS (United States); D'Emidio, Marco [Univ. of Mississippi, Oxford, MS (United States); Dunbar, John [Univ. of Mississippi, Oxford, MS (United States); Higley, Paul [Univ. of Mississippi, Oxford, MS (United States)

    2015-01-31

    This study was designed to investigate temporal variations in hydrate system dynamics by measuring changes in volumes of hydrate beneath hydrate-bearing mounds on the continental slope of the northern Gulf of Mexico, the landward extreme of hydrate occurrence in this region. Direct Current Resistivity (DCR) measurements were made contemporaneously with measurements of oceanographic parameters at Woolsey Mound, a carbonate-hydrate complex on the mid-continental slope, where formation and dissociation of hydrates are most vulnerable to variations in oceanographic parameters affected by climate change, and where changes in hydrate stability can readily translate to loss of seafloor stability, impacts to benthic ecosystems, and venting of greenhouse gases to the water column and, eventually, the atmosphere. We focused our study on hydrate within seafloor mounds because the structurally focused methane flux at these sites likely causes hydrate formation and dissociation processes to occur at higher rates than at sites where the methane flux is less concentrated, and because we wanted to maximize our chances of witnessing formation/dissociation of hydrates. We selected a particularly well-studied hydrate-bearing seafloor mound near the landward extent of the hydrate stability zone, Woolsey Mound (MC118). This mid-slope site has been studied extensively, and the project was able to leverage considerable resources from the team's research experience at MC118. The site exhibits seafloor features associated with gas expulsion, hydrates have been documented at the seafloor, and changes in the outcropping hydrates have been documented photographically over a period of months. We conducted observatory-based, in situ measurements to 1) characterize, geophysically, the sub-bottom distribution of hydrate and its temporal variability, and 2) contemporaneously record relevant environmental parameters (temperature, pressure, salinity, turbidity, bottom currents) to

  10. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects are specific traits of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages....

  11. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.

  12. Effects of scanner acoustic noise on intrinsic brain activity during auditory stimulation

    Energy Technology Data Exchange (ETDEWEB)

    Yakunina, Natalia [Kangwon National University, Institute of Medical Science, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kang, Eun Kyoung [Kangwon National University Hospital, Department of Rehabilitation Medicine, Chuncheon (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of); Min, Ji-Hoon [University of Michigan, Department of Biopsychology, Cognition, and Neuroscience, Ann Arbor, MI (United States); Kim, Sam Soo [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Radiology, Chuncheon (Korea, Republic of); Nam, Eui-Cheol [Kangwon National University Hospital, Neuroscience Research Institute, Chuncheon (Korea, Republic of); Kangwon National University, School of Medicine, Department of Otolaryngology, Chuncheon (Korea, Republic of)

    2015-10-15

    Although the effects of scanner background noise (SBN) during functional magnetic resonance imaging (fMRI) have been extensively investigated for the brain regions involved in auditory processing, its impact on other types of intrinsic brain activity has largely been neglected. The present study evaluated the influence of SBN on a number of intrinsic connectivity networks (ICNs) during auditory stimulation by comparing the results obtained using sparse temporal acquisition (STA) with those using continuous acquisition (CA). Fourteen healthy subjects were presented with classical music pieces in a block paradigm during two sessions of STA and CA. A volume-matched CA dataset (CAm) was generated by subsampling the CA dataset to temporally match it with the STA data. Independent component analysis was performed on the concatenated STA-CAm datasets, and voxel data, time courses, power spectra, and functional connectivity were compared. The ICA revealed 19 ICNs; the auditory, default mode, salience, and frontoparietal networks showed greater activity in the STA. The spectral peaks in 17 networks corresponded to the stimulation cycles in the STA, while only five networks displayed this correspondence in the CA. The dorsal default mode and salience networks exhibited stronger correlations with the stimulus waveform in the STA. SBN appeared to influence not only the areas of auditory response but also the majority of other ICNs, including attention and sensory networks. Therefore, SBN should be regarded as a serious nuisance factor during fMRI studies investigating intrinsic brain activity under external stimulation or task loads. (orig.)

  13. Tactile feedback improves auditory spatial localization.

    Science.gov (United States)

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality. PMID:25368587

  14. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
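
Bisection thresholds in studies of this kind are usually estimated by fitting a cumulative Gaussian psychometric function to the proportion of "closer to the third sound" responses; the threshold then corresponds to the spread parameter sigma. A minimal sketch of such a psychometric function with illustrative parameter values, not taken from the study:

```python
import math

def p_third(offset_deg: float, mu: float = 0.0, sigma: float = 5.0) -> float:
    """Cumulative Gaussian psychometric function: probability of judging
    the second sound closer to the third, as a function of its signed
    offset (deg) from the physical midpoint. mu captures constant bias;
    sigma (the spread) serves as the bisection threshold."""
    return 0.5 * (1.0 + math.erf((offset_deg - mu) / (sigma * math.sqrt(2.0))))

print(p_third(0.0))  # 0.5: at the midpoint the observer is at chance
print(p_third(5.0))  # ~0.84: one sigma away from the midpoint
```

A training-induced improvement, such as the audio-tactile effect reported above, would show up as a smaller fitted sigma after feedback.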

  15. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as salicylate can induce temporary hearing loss and tinnitus. Recent studies have observed enhancement of auditory evoked responses of the auditory cortex (AC) after salicylate treatment, which has also been shown to be related to tinnitus-like behavior in rats. The aim of this study was to determine whether the enhancements seen in the AC after salicylate treatment are also present in structures of the brainstem. Four male Sprague Dawley rats with electrodes implanted in the AC were tested with both AC and auditory brainstem response (ABR) recordings before and after intraperitoneal injections of 250 mg/kg salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses showed statistically significant amplitude enhancement at 2 hours post-salicylate with 90 dB stimulus tone bursts of 4, 8, 12, and 20 kHz. Wave V of the ABR responses at 90 dB showed a statistically significant reduction in amplitude 2 hours post-salicylate, with a mean amplitude decrease of 31% for 16 kHz. Wave II amplitudes at 2 hours post-treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes in the AC related to salicylate-induced tinnitus are generated above the level of the inferior colliculus and may originate in the AC.

  16. Auditory Neural Prostheses – A Window to the Future

    Directory of Open Access Journals (Sweden)

    Mohan Kameshwaran

    2015-06-01

    Full Text Available Hearing loss is one of the commonest congenital anomalies to affect children world-over. The incidence of congenital hearing loss is more pronounced in developing countries like the Indian sub-continent, especially with the problems of consanguinity. Hearing loss is a double tragedy, as it leads not only to deafness but also to language deprivation. However, hearing loss is the only truly remediable handicap, owing to remarkable advances in biomedical engineering and surgical techniques. Auditory neural prostheses help to augment or restore hearing by integrating external circuitry with the peripheral hearing apparatus and the central circuitry of the brain. A cochlear implant (CI) is a surgically implantable device that helps restore hearing in patients with severe-to-profound hearing loss unresponsive to amplification by conventional hearing aids. CIs are electronic devices designed to detect mechanical sound energy and convert it into electrical signals that can be delivered to the cochlear nerve, bypassing the damaged hair cells of the cochlea. The only true prerequisite is an intact auditory nerve. The emphasis is on implantation as early as possible to maximize speech understanding and perception. Bilateral CI has significant benefits, including improved speech perception in noisy environments and improved sound localization. Presently, the indications for CI have widened, and these expanded indications relate to age, additional handicaps, residual hearing, and special etiologies of deafness. The combined electric and acoustic stimulation (EAS)/hybrid device is designed for individuals with binaural low-frequency hearing and severe-to-profound high-frequency hearing loss. Auditory brainstem implantation (ABI) is a safe and effective means of hearing rehabilitation in patients with retrocochlear disorders, such as neurofibromatosis type 2 (NF2) or congenital cochlear nerve aplasia, wherein the cochlear nerve is damaged.

  17. Dynamics of Electrocorticographic (ECoG) Activity in Human Temporal and Frontal Cortical Areas During Music Listening

    Science.gov (United States)

    Potes, Cristhian; Gunduz, Aysegul; Brunner, Peter; Schalk, Gerwin

    2012-01-01

    Previous studies demonstrated that brain signals encode information about specific features of simple auditory stimuli or of general aspects of natural auditory stimuli. How brain signals represent the time course of specific features in natural auditory stimuli is not well understood. In this study, we show in eight human subjects that signals recorded from the surface of the brain (electrocorticography (ECoG)) encode information about the sound intensity of music. ECoG activity in the high gamma band recorded from the posterior part of the superior temporal gyrus as well as from an isolated area in the precentral gyrus were observed to be highly correlated with the sound intensity of music. These results not only confirm the role of auditory cortices in auditory processing but also point to an important role of premotor and motor cortices. They also encourage the use of ECoG activity to study more complex acoustic features of simple or natural auditory stimuli. PMID:22537600
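
The reported link between high-gamma power and the music's sound intensity is, at its core, a correlation between two aligned time series (a band-power envelope and an intensity envelope). A self-contained sketch of that computation with made-up toy envelopes, not the study's data:

```python
import math

def pearson_r(x: list, y: list) -> float:
    """Pearson correlation between two equal-length time series, e.g. a
    high-gamma ECoG power envelope and a sound-intensity envelope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# Toy envelopes: gamma power tracking intensity through a linear gain
intensity = [0.1, 0.4, 0.9, 0.7, 0.2, 0.5]
gamma_power = [2.0 * v + 1.0 for v in intensity]
print(round(pearson_r(intensity, gamma_power), 3))  # 1.0
```

In practice the envelopes would be extracted first (e.g. band-pass filtering and power estimation for ECoG, RMS level for the audio), and the correlation computed per electrode to map which cortical sites track intensity.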

  18. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Tollkötter Melanie

    2006-08-01

    Full Text Available Background: A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypotheses that (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results: Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion: These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.

  19. Persistent fluctuations in stride intervals under fractal auditory stimulation.

    Science.gov (United States)

    Marmelat, Vivien; Torre, Kjerstin; Beek, Peter J; Daffertshofer, Andreas

    2014-01-01

    Stride sequences of healthy gait are