WorldWideScience

Sample records for auditory temporal integration

  1. Auditory temporal resolution and integration - stages of analyzing time-varying sounds

    DEFF Research Database (Denmark)

    Pedersen, Benjamin

    2007-01-01

    … much is still unknown of how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, and the topic is approached via four different listening experiments designed to probe several aspects of temporal processing … scheme: effects such as attention seem to play an important role in loudness integration, and further, it will be demonstrated that the auditory system can rely on temporal cues at a much finer level of detail than predicted by existing models (temporal details in the time range of 60 μs can…

  2. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.
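
    As a rough illustration of the latency-estimation approach this record describes, the sketch below fits a Gaussian to hypothetical synchrony-judgment rates as a function of auditory lag; the fitted peak is the optimal latency (point of subjective simultaneity). All data, names, and the Gaussian form are illustrative assumptions, not taken from the study.

```python
# Sketch: estimate the optimal auditory lag (point of subjective simultaneity)
# from synchrony-judgment rates. Data and names are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lag_ms, peak_ms, width_ms, amplitude):
    # Proportion of "synchronous" responses as a function of auditory lag.
    return amplitude * np.exp(-0.5 * ((lag_ms - peak_ms) / width_ms) ** 2)

lags = np.array([-120.0, -80, -40, 0, 40, 80, 120, 160, 200])  # auditory lag (ms)
p_sync = np.array([0.10, 0.25, 0.55, 0.80, 0.95, 0.85, 0.60, 0.35, 0.15])

(peak, width, amp), _ = curve_fit(gaussian, lags, p_sync, p0=(40.0, 80.0, 1.0))
print(f"optimal auditory lag (PSS): {peak:.1f} ms")
```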

  3. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  4. Auditory temporal processes in the elderly

    Directory of Open Access Journals (Sweden)

    E. Ben-Artzi

    2011-03-01

    Full Text Available Several studies have reported age-related decline in auditory temporal resolution and in working memory. However, earlier studies did not provide evidence as to whether these declines reflect overall changes in the same mechanism, or reflect age-related changes in two independent mechanisms. In the current study we examined whether the age-related declines in auditory temporal resolution and in working memory would remain significant even after controlling for their shared variance. Eighty-two participants, aged 21-82, performed the dichotic temporal order judgment task and the backward digit span task. The findings indicate that the age-related declines in auditory temporal resolution and in working memory are two independent processes.
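
    The "shared variance" control this record describes can be illustrated with partial correlations. The sketch below, with simulated data (all values and names are assumptions, not the study's data), checks whether age still relates to temporal-order thresholds after removing the linear effect of digit span, and vice versa.

```python
# Sketch: partial correlations between age, temporal-order threshold, and
# working memory (backward digit span). All data are simulated assumptions.
import numpy as np

def partial_corr(x, y, z):
    # Correlation between x and y after removing the linear effect of z.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
age = rng.uniform(21, 82, 82)                    # 82 simulated participants
toj_ms = 20 + 0.5 * age + rng.normal(0, 10, 82)  # simulated TOJ thresholds
span = 9 - 0.04 * age + rng.normal(0, 1, 82)     # simulated digit span

print("age ~ TOJ, span removed:", partial_corr(age, toj_ms, span))
print("age ~ span, TOJ removed:", partial_corr(age, span, toj_ms))
```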

  5. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL), i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri, is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds. These tests evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients both before and after ATL presented with similar deficits in pitch retention and in identification and short-term memorisation of environmental sounds, but were not impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to improve seizure control significantly without producing additional auditory deficits.

  6. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to lie in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. According to one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
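
    The phase tracking reported here is commonly quantified as inter-trial phase coherence. The following minimal sketch (simulated data; the analysis details are my assumptions, not the authors' MEG pipeline) computes that measure for band-limited single-trial responses.

```python
# Sketch: inter-trial phase coherence (ITC) of band-limited responses.
# Simulated data; not the authors' MEG pipeline.
import numpy as np
from scipy.signal import hilbert

def inter_trial_coherence(band_limited):
    # band_limited: (n_trials, n_samples). Returns ITC per time point, in [0, 1].
    phase = np.angle(hilbert(band_limited, axis=-1))
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

fs, dur, n_trials = 250, 2.0, 50
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)
theta = np.sin(2 * np.pi * 4 * t)                    # phase-locked 4 Hz component
trials = theta + 0.5 * rng.normal(size=(n_trials, t.size))
print("mean ITC:", inter_trial_coherence(trials).mean())
```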

  7. Altered Auditory and Multisensory Temporal Processing in Autism Spectrum Disorders

    Science.gov (United States)

    Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Cascio, Carissa J.; Stone, Wendy L.; Wallace, Mark T.

    2011-01-01

    Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as by repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli also have been reported frequently. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recently published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies. PMID:21258617

  8. Altered auditory and multisensory temporal processing in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Leslie D Kwakye

    2011-01-01

    Full Text Available Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli have also frequently been reported. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recently published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically-developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies.
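
    The TOJ thresholds in records 7 and 8 are typically derived by fitting a psychometric function. The sketch below fits a cumulative Gaussian to simulated "audio first" response rates; the fitted standard deviation serves as the temporal acuity threshold. Data and parameter values are illustrative, not from the study.

```python
# Sketch: temporal acuity from a temporal order judgment (TOJ) task by fitting
# a cumulative Gaussian to "audio first" rates. Simulated example data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(soa_ms, pss_ms, jnd_ms):
    # Probability of "audio first" as a function of audio-minus-visual SOA.
    return norm.cdf(soa_ms, loc=pss_ms, scale=jnd_ms)

soas = np.array([-200.0, -100, -50, 0, 50, 100, 200])
p_audio_first = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95])

(pss, jnd), _ = curve_fit(cum_gauss, soas, p_audio_first, p0=(0.0, 80.0))
print(f"PSS = {pss:.1f} ms, acuity threshold (SD) = {jnd:.1f} ms")
```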

  9. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation…, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models… of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent…

  10. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine ePecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper, and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
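
    The tap-to-pacing cross-correlation index mentioned in this record can be sketched as follows. This toy version (names and data are mine, not the authors') compares lag-0 and lag-1 correlations between inter-tap intervals and pacing inter-onset intervals; predictive tappers show relatively stronger lag-0 correlation than trackers.

```python
# Sketch: a prediction/tracking index from lag-0 vs lag-1 correlations between
# inter-tap intervals and pacing inter-onset intervals. Names and data are mine.
import numpy as np

def prediction_index(tap_iti, pace_ioi):
    r0 = np.corrcoef(tap_iti[1:], pace_ioi[1:])[0, 1]    # taps vs current IOI
    r1 = np.corrcoef(tap_iti[1:], pace_ioi[:-1])[0, 1]   # taps vs previous IOI
    return r0 / r1                                       # >1 suggests prediction

rng = np.random.default_rng(2)
pace = 500 + 100 * np.sin(np.linspace(0, 4 * np.pi, 60)) # gradual tempo changes
taps = pace + rng.normal(0, 10, 60)                      # predictive tapper
print("prediction index:", prediction_index(taps, pace))
```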

  11. Neuromagnetic mismatch field (MMF) dependence on the auditory temporal integration window and the existence of categorical boundaries: comparisons between dissyllabic words and their equivalent tones.

    Science.gov (United States)

    Inouchi, Mayako; Kubota, Mikio; Ohta, Katsuya; Matsushima, Eisuke; Ferrari, Paul; Scovel, Thomas

    2008-09-26

    Previous duration-related auditory mismatch response studies have tested vowels, words, and tones. Recently, the elicitation of strong neuromagnetic mismatch field (MMF) components in response to large (>32%) vowel-duration decrements was clearly observed within dissyllabic words. To date, however, the issues of whether this MMF duration-decrement effect also extends to duration increments, and to what degree these duration decrements and increments are attributed to their corresponding non-speech acoustic properties, remain to be resolved. Accordingly, this magnetoencephalographic (MEG) study investigated whether prominent MMF components would be evoked by both duration decrements and increments for dissyllabic word stimuli as well as frequency-band matched tones, in order to corroborate the relation between MMF elicitation and the direction of duration changes in speech and non-speech. Further, the peak latency effects depending on stimulus type (words vs. tones) were examined. MEG responses were recorded with a whole-head 148-channel magnetometer, while subjects passively listened to the stimuli presented within an odd-ball paradigm for both shortened duration (180-->100%) and lengthened duration (100-->180%). Prominent MMF components were observed in the shortened and lengthened paradigms for the word stimuli, but only in the shortened paradigm for tones. The MMF peak latency results showed that the words led to earlier peak latencies than the tones. These findings suggest that duration lengthening as well as shortening in words produces a salient acoustic MMF response when the divergent point between the long and short durations falls within the temporal window of auditory integration post sound onset (<200 ms), and that the earlier latency of the dissyllabic word stimuli over tones is due to a prominent syllable structure in words, which is used to generate temporal categorical boundaries.

  12. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  13. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  14. Effects of Physical Rehabilitation Integrated with Rhythmic Auditory Stimulation on Spatio-Temporal and Kinematic Parameters of Gait in Parkinson's Disease.

    Science.gov (United States)

    Pau, Massimiliano; Corona, Federica; Pili, Roberta; Casula, Carlo; Sors, Fabrizio; Agostini, Tiziano; Cossu, Giovanni; Guicciardi, Marco; Murgia, Mauro

    2016-01-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson's disease (PD). In this context, the use of rhythmic auditory stimulation (RAS) has been proven useful in improving several spatio-temporal parameters, but scarce information is available on its effect on gait patterns from a kinematic viewpoint. In this study, we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of supervised rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4 ± 11.1, Hoehn and Yahr 1-3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a 3-month follow-up, using concise measures (Gait Profile Score and Gait Variable Score, GPS and GVS, respectively), which are able to describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence, and in correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, driven mainly by significant decreases in the GVS score associated with the hip flexion-extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new rehabilitative treatments or improving current ones.
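
    The Gait Profile Score and Gait Variable Score used here have a simple structure: each GVS is the RMS deviation of one kinematic curve from a reference over the gait cycle, and the GPS is the RMS of the GVS values. A minimal sketch with simulated curves (array shapes and values are assumptions, not the study's data):

```python
# Sketch: Gait Variable Score (GVS) and Gait Profile Score (GPS).
# Curves are simulated; shapes (9 variables x 101 points) are assumptions.
import numpy as np

def gvs(subject_curve, reference_curve):
    # RMS deviation of one kinematic variable across the gait cycle (deg).
    return np.sqrt(np.mean((subject_curve - reference_curve) ** 2))

def gps(subject_curves, reference_curves):
    # RMS of the per-variable GVS values.
    scores = [gvs(s, r) for s, r in zip(subject_curves, reference_curves)]
    return np.sqrt(np.mean(np.square(scores))), scores

rng = np.random.default_rng(3)
ref = rng.normal(0, 10, (9, 101))        # reference: 9 variables, 0-100% cycle
subj = ref + rng.normal(2, 3, (9, 101))  # deviated patient curves
total, per_variable = gps(subj, ref)
print(f"GPS = {total:.1f} deg")
```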

  15. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia.

  16. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task both in perceived time and in precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
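
    The Bayesian prediction against which these thresholds were compared is usually the maximum-likelihood cue-combination rule: each cue is weighted by its inverse variance, and the predicted bimodal threshold lies below either unimodal threshold. A short sketch with illustrative numbers (not the study's data):

```python
# Sketch: maximum-likelihood (Bayesian) cue combination. Inverse-variance
# weights and the predicted bimodal threshold. Numbers are illustrative.
import numpy as np

def mle_combine(est_v, sigma_v, est_a, sigma_a):
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    combined = w_v * est_v + (1 - w_v) * est_a
    sigma_bi = np.sqrt(sigma_v**2 * sigma_a**2 / (sigma_v**2 + sigma_a**2))
    return combined, sigma_bi

# Spatial example: vision is the more precise cue, so it should dominate.
pos, sigma = mle_combine(est_v=0.0, sigma_v=1.0, est_a=2.0, sigma_a=3.0)
print(f"predicted bimodal estimate {pos:.2f}, threshold {sigma:.2f}")
```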

  17. Auditory Temporal Resolution in Individuals with Diabetes Mellitus Type 2

    OpenAIRE

    2016-01-01

    Introduction “Diabetes mellitus is a group of metabolic disorders characterized by elevated blood sugar and abnormalities in insulin secretion and action” (American Diabetes Association). Previous literature has reported a connection between diabetes mellitus and hearing impairment. There is a dearth of literature on auditory temporal resolution ability in individuals with diabetes mellitus type 2. Objective The main objective of the present study was to assess auditory temporal resolution a...

  18. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko eOkamoto

    2012-05-01

    Full Text Available Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  19. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensation and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) during three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  20. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

    Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  1. Subcortical neural coding mechanisms for auditory temporal processing.

    Science.gov (United States)

    Frisina, R D

    2001-08-01

    Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with
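
    The two laboratory stimuli this review names, amplitude-modulated (AM) tones and gaps between sounds, are straightforward to generate; the sketch below shows both with arbitrary example parameters (my choices, not values from the review).

```python
# Sketch: a sinusoidally amplitude-modulated (SAM) tone and a gap stimulus,
# with arbitrary example parameters.
import numpy as np

fs = 44100                                   # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)                # 500 ms

fc, fm, m = 4000.0, 100.0, 1.0               # carrier, AM rate, AM depth
sam_tone = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

gap_ms = 5                                   # silent gap between two markers
marker = np.sin(2 * np.pi * fc * np.arange(0, 0.2, 1 / fs))
gap_stim = np.concatenate([marker, np.zeros(int(fs * gap_ms / 1000)), marker])
```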

  2. Temporal resolution in the hearing system and auditory evoked potentials

    DEFF Research Database (Denmark)

    Miller, Lee; Beedholm, Kristian

    2008-01-01

    3pAB5. Temporal resolution in the hearing system and auditory evoked potentials. Kristian Beedholm and Lee A. Miller, Institute of Biology, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark (beedholm@mail.dk, lee@biology.sdu.dk). A popular type of investigation with auditory evoked potentials (AEP) consists of mapping the dependency of the envelope following response on the AM frequency. This results in what is called the modulation rate transfer function (MRTF). The physiological interpretation… of the MRTF is not straightforward, but it is often used as a measure of the ability of the auditory system to encode temporal changes. It is, however, shown here that the MRTF must depend on the waveform of the click-evoked AEP (ceAEP), which does not relate directly to temporal resolution. The theoretical...
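
    The abstract's argument can be sketched directly: under a linear model in which the envelope following response is the stimulus envelope convolved with the ceAEP, the MRTF is just the magnitude spectrum of the ceAEP. The toy ceAEP below (a damped oscillation) is my assumption, chosen only to make the point concrete.

```python
# Sketch: if the envelope following response is the stimulus envelope convolved
# with the click-evoked AEP (ceAEP), the MRTF equals the ceAEP's magnitude
# spectrum. The damped-oscillation ceAEP below is a toy assumption.
import numpy as np

fs = 10000
t = np.arange(0, 0.05, 1 / fs)                            # 50 ms epoch
ceaep = np.exp(-t / 0.005) * np.sin(2 * np.pi * 300 * t)  # toy ceAEP waveform

freqs = np.fft.rfftfreq(ceaep.size, 1 / fs)
magnitude = np.abs(np.fft.rfft(ceaep))
mod_rates = np.arange(20, 1000, 20)                       # AM rates to test (Hz)
mrtf = np.interp(mod_rates, freqs, magnitude)             # predicted MRTF
print("predicted best modulation rate:", mod_rates[np.argmax(mrtf)], "Hz")
```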

  3. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    Full Text Available We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensation and voice sounds under DAF with an adaptation technique. Participants read sentences with specific delay times of DAF (0, 30, 75, 120 ms) during three minutes to induce ‘Lag Adaptation’. After the adaptation, they judged the simultaneity between the motor sensation and the vocal sound given as feedback when producing a simple voice sound, but not speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  4. Auditory Integration Training: The Magical Mystery Cure.

    Science.gov (United States)

    Tharpe, Anne Marie

    1999-01-01

    This article notes the enthusiastic reception received by auditory integration training (AIT) for children with a wide variety of disorders including autism but raises concerns about this alternative treatment practice. It offers reasons for cautious evaluation of AIT prior to clinical implementation and summarizes current research findings. (DB)

  5. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  6. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    Science.gov (United States)

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing.

  7. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  8. Temporal coding by populations of auditory receptor neurons.

    Science.gov (United States)

    Sabourin, Patrick; Pollack, Gerald S

    2010-03-01

    Auditory receptor neurons of crickets are most sensitive to either low or high sound frequencies. Earlier work showed that the temporal coding properties of first-order auditory interneurons are matched to the temporal characteristics of natural low- and high-frequency stimuli (cricket songs and bat echolocation calls, respectively). We studied the temporal coding properties of receptor neurons and used modeling to investigate how activity within populations of low- and high-frequency receptors might contribute to the coding properties of interneurons. We confirm earlier findings that individual low-frequency-tuned receptors code stimulus temporal pattern poorly, but show that coding performance of a receptor population increases markedly with population size, due in part to low redundancy among the spike trains of different receptors. By contrast, individual high-frequency-tuned receptors code a stimulus temporal pattern fairly well and, because their spike trains are redundant, there is only a slight increase in coding performance with population size. The coding properties of low- and high-frequency receptor populations resemble those of interneurons in response to low- and high-frequency stimuli, suggesting that coding at the interneuron level is partly determined by the nature and organization of afferent input. Consistent with this, the sound-frequency-specific coding properties of an interneuron, previously demonstrated by analyzing its spike train, are also apparent in the subthreshold fluctuations in membrane potential that are generated by synaptic input from receptor neurons.

  9. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10-50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...

  10. Spectral and temporal processing in rat posterior auditory cortex.

    Science.gov (United States)

    Pandya, Pritesh K; Rathbun, Daniel L; Moucha, Raluca; Engineer, Navzer D; Kilgard, Michael P

    2008-02-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex.

  11. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

    Full Text Available Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.

  12. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  13. Repetition suppression in auditory-motor regions to pitch and temporal structure in music.

    Science.gov (United States)

    Brown, Rachel M; Chen, Joyce L; Hollinger, Avrum; Penhune, Virginia B; Palmer, Caroline; Zatorre, Robert J

    2013-02-01

    Music performance requires control of two sequential structures: the ordering of pitches and the temporal intervals between successive pitches. Whether pitch and temporal structures are processed as separate or integrated features remains unclear. A repetition suppression paradigm compared neural and behavioral correlates of mapping pitch sequences and temporal sequences to motor movements in music performance. Fourteen pianists listened to and performed novel melodies on an MR-compatible piano keyboard during fMRI scanning. The pitch or temporal patterns in the melodies either changed or repeated (remained the same) across consecutive trials. We expected decreased neural response to the patterns (pitch or temporal) that repeated across trials relative to patterns that changed. Pitch and temporal accuracy were high, and pitch accuracy improved when either pitch or temporal sequences repeated over trials. Repetition of either pitch or temporal sequences was associated with linear BOLD decrease in frontal-parietal brain regions including dorsal and ventral premotor cortex, pre-SMA, and superior parietal cortex. Pitch sequence repetition (in contrast to temporal sequence repetition) was associated with linear BOLD decrease in the intraparietal sulcus (IPS) while pianists listened to melodies they were about to perform. Decreased BOLD response in IPS also predicted increase in pitch accuracy only when pitch sequences repeated. Thus, behavioral performance and neural response in sensorimotor mapping networks were sensitive to both pitch and temporal structure, suggesting that pitch and temporal structure are largely integrated in auditory-motor transformations. IPS may be involved in transforming pitch sequences into spatial coordinates for accurate piano performance.

  14. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    Full Text Available In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration, used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus, resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of

  15. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    Science.gov (United States)

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062

  16. A Pilot Study of Auditory Integration Training in Autism.

    Science.gov (United States)

    Rimland, Bernard; Edelson, Stephen M.

    1995-01-01

    The effectiveness of Auditory Integration Training (AIT) in 8 autistic individuals (ages 4-21) was evaluated using repeated multiple criteria assessment over a 3-month period. Compared to matched controls, subjects' scores improved on the Aberrant Behavior Checklist and Fisher's Auditory Problems Checklist. AIT did not decrease sound sensitivity.

  17. Spectral features control temporal plasticity in auditory cortex.

    Science.gov (United States)

    Kilgard, M P; Pandya, P K; Vazquez, J L; Rathbun, D L; Engineer, N D; Moucha, R

    2001-01-01

    Cortical responses are adjusted and optimized throughout life to meet changing behavioral demands and to compensate for peripheral damage. The cholinergic nucleus basalis (NB) gates cortical plasticity and focuses learning on behaviorally meaningful stimuli. By systematically varying the acoustic parameters of the sound paired with NB activation, we have previously shown that tone frequency and amplitude modulation rate alter the topography and selectivity of frequency tuning in primary auditory cortex. This result suggests that network-level rules operate in the cortex to guide reorganization based on specific features of the sensory input associated with NB activity. This report summarizes recent evidence that temporal response properties of cortical neurons are influenced by the spectral characteristics of sounds associated with cholinergic modulation. For example, repeated pairing of a spectrally complex (ripple) stimulus decreased the minimum response latency for the ripple, but lengthened the minimum latency for tones. Pairing a rapid train of tones with NB activation only increased the maximum following rate of cortical neurons when the carrier frequency of each train was randomly varied. These results suggest that spectral and temporal parameters of acoustic experiences interact to shape spectrotemporal selectivity in the cortex. Additional experiments with more complex stimuli are needed to clarify how the cortex learns natural sounds such as speech.

  18. Changes across time in the temporal responses of auditory nerve fibers stimulated by electric pulse trains.

    Science.gov (United States)

    Miller, Charles A; Hu, Ning; Zhang, Fawen; Robinson, Barbara K; Abbas, Paul J

    2008-03-01

    Most auditory prostheses use modulated electric pulse trains to excite the auditory nerve. There are, however, scant data regarding the effects of pulse trains on auditory nerve fiber (ANF) responses across the duration of such stimuli. We examined how temporal ANF properties changed with level and pulse rate across 300-ms pulse trains. Four measures were examined: (1) first-spike latency, (2) interspike interval (ISI), (3) vector strength (VS), and (4) Fano factor (FF, an index of the temporal variability of responsiveness). Data were obtained using 250-, 1,000-, and 5,000-pulse/s stimuli. First-spike latency decreased with increasing spike rate, with relatively small decrements observed for 5,000-pulse/s trains, presumably reflecting integration. ISIs to low-rate (250 pulse/s) trains were strongly locked to the stimuli, whereas ISIs evoked with 5,000-pulse/s trains were dominated by refractory and adaptation effects. Across time, VS decreased for low-rate trains but not for 5,000-pulse/s stimuli. At relatively high spike rates (>200 spike/s), VS values for 5,000-pulse/s trains were lower than those obtained with 250-pulse/s stimuli (even after accounting for the smaller periods of the 5,000-pulse/s stimuli), indicating a desynchronizing effect of high-rate stimuli. FF measures also indicated a desynchronizing effect of high-rate trains. Across a wide range of response rates, FF underwent relatively fast increases (i.e., within 100 ms) for 5,000-pulse/s stimuli. With a few exceptions, ISI, VS, and FF measures approached asymptotic values within the 300-ms duration of the low- and high-rate trains. These findings may have implications for designs of cochlear implant stimulus protocols, understanding electrically evoked compound action potentials, and interpretation of neural measures obtained at central nuclei, which depend on understanding the output of the auditory nerve.
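
    Two of the four measures used in this study, vector strength and the Fano factor, have compact definitions, sketched below on simulated spike data (all numbers are placeholders, not the study's recordings).

```python
# Sketch: vector strength (VS) and Fano factor (FF) on simulated spike data.
# All numbers are placeholders, not the study's recordings.
import numpy as np

def vector_strength(spike_times_s, period_s):
    # 1.0 when every spike falls at the same stimulus phase.
    phases = 2 * np.pi * (spike_times_s % period_s) / period_s
    return np.abs(np.mean(np.exp(1j * phases)))

def fano_factor(spike_counts):
    # Variance-to-mean ratio of spike counts across repeated presentations.
    return np.var(spike_counts) / np.mean(spike_counts)

rng = np.random.default_rng(4)
period = 1 / 250                                      # 250 pulse/s train
spikes = rng.integers(0, 75, 200) * period            # pulse-aligned spike times
spikes = np.sort(spikes + rng.normal(0, 0.0004, 200)) # 0.4 ms jitter
print("VS:", vector_strength(spikes, period))
print("FF:", fano_factor(rng.poisson(50, 30)))
```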

  19. Carrier-dependent temporal processing in an auditory interneuron.

    Science.gov (United States)

    Sabourin, Patrick; Gottlieb, Heather; Pollack, Gerald S

    2008-05-01

    Signal processing in the auditory interneuron Omega Neuron 1 (ON1) of the cricket Teleogryllus oceanicus was compared at high- and low-carrier frequencies in three different experimental paradigms. First, integration time, which corresponds to the time it takes for a neuron to reach threshold when stimulated at the minimum effective intensity, was found to be significantly shorter at high-carrier frequency than at low-carrier frequency. Second, phase locking to sinusoidally amplitude modulated signals was more efficient at high frequency, especially at high modulation rates and low modulation depths. Finally, we examined the efficiency with which ON1 detects gaps in a constant tone. As reflected by the decrease in firing rate in the vicinity of the gap, ON1 is better at detecting gaps at low-carrier frequency. Following a gap, firing rate increases beyond the pre-gap level. This "rebound" phenomenon is similar for low- and high-carrier frequencies.

  20. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  1. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Full Text Available Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required ‘active’ discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral ‘auditory’ alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  2. Lengthened temporal integration in schizophrenia.

    Science.gov (United States)

    Parsons, Brent D; Gandhi, Shilpa; Aurbach, Elyse L; Williams, Nina; Williams, Micah; Wassef, Adel; Eagleman, David M

    2013-01-01

    Research in schizophrenia has tended to emphasize deficits in higher cognitive abilities, such as attention, memory, and executive function. Here we provide evidence for dysfunction at a more fundamental level of perceptual processing, temporal integration. On a measure of flicker fusion, patients with schizophrenia exhibited significantly lower thresholds than age and education matched healthy controls. We reasoned that this finding could result from a longer window of temporal integration or could reflect diminished repetition suppression: if every frame of the repeating stimulus were represented as novel, its perceived duration would be accordingly longer. To tease apart these non-exclusive hypotheses, we asked patients to report the number of stimuli perceived on the screen at once (numerosity) as they watched rapidly flashing stimuli that were either repeated or novel. Patients reported significantly higher numerosity than controls in all conditions, again indicating a longer window of temporal integration in schizophrenia. Further, patients showed the largest difference from controls in the repeated condition, suggesting a possible effect of weaker repetition suppression. Finally, we establish that our findings generalize to several different classes of stimuli (letters, pictures, faces, words, and pseudo-words), demonstrating a non-specific effect of a lengthened window of integration. We conclude that the visual system in schizophrenics integrates input over longer periods of time, and that repetition suppression may also be deficient. We suggest that these abnormalities in the processing of temporal information may underlie higher-level deficits in schizophrenia and account for the disturbed sense of continuity and fragmentation of events in time reported by patients.

  3. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    Science.gov (United States)

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery.
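    A modulation rate transfer function of the kind used here is built by measuring the steady-state response amplitude at each stimulus modulation rate. A minimal sketch of that core step, assuming a trial-averaged recording and reading the FFT amplitude at the modulation frequency (the sampling rate, rates, and synthetic "low-pass" system are illustrative only, not the authors' pipeline):

```python
import numpy as np

def assr_amplitude(response, fs, mod_rate):
    """Amplitude of the steady-state response component at the stimulus
    modulation frequency, read from the FFT of the averaged recording."""
    n = len(response)
    spec = np.fft.rfft(response) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - mod_rate))
    return 2 * np.abs(spec[idx])

# Sweep modulation rates to build an MRTF from synthetic responses
fs = 2000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
mrtf = {}
for rate in (20, 40, 80, 160):                  # Hz, illustrative rates
    gain = 1.0 / (1.0 + (rate / 60.0) ** 2)     # pretend low-pass system
    resp = gain * np.sin(2 * np.pi * rate * t) + rng.normal(0, 0.2, t.size)
    mrtf[rate] = assr_amplitude(resp, fs, rate)
print(mrtf)  # amplitude falls with rate, as for a band-limited MRTF
```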

  4. Temporal coordination in joint music performance: effects of endogenous rhythms and auditory feedback.

    Science.gov (United States)

    Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline

    2015-02-01

    Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.

  5. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alternation was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  6. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    Science.gov (United States)

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  7. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    Science.gov (United States)

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.

  8. Temporal feature integration for music genre classification

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan;

    2007-01-01

    Temporal feature integration is the process of combining all the feature vectors in a time window into a single feature vector in order to capture the relevant temporal information in the window. The mean and variance along the temporal dimension are often used for temporal feature integration... Multivariate autoregressive (MAR) features are compared against the baseline mean-variance as well as two other temporal feature integration techniques. Reproducibility in the performance ranking of temporal feature integration methods was demonstrated using two data sets with five and eleven music genres...

  9. Temporal pattern of acoustic imaging noise asymmetrically modulates activation in the auditory cortex.

    Science.gov (United States)

    Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M

    2016-01-01

    This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation.

  10. Temporal feature integration for music genre classification

    OpenAIRE

    Meng, Anders; Ahrendt, Peter; Larsen, Jan; Hansen, Lars Kai

    2007-01-01

    Temporal feature integration is the process of combining all the feature vectors in a time window into a single feature vector in order to capture the relevant temporal information in the window. The mean and variance along the temporal dimension are often used for temporal feature integration, but they capture neither the temporal dynamics nor dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music genre classification...
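    The two integration schemes contrasted in these duplicate records reduce to a few lines each: mean-variance stacking, and a multivariate autoregressive (MAR) model fitted by least squares over a window of short-time feature vectors. A sketch under assumed settings (13-dimensional features standing in for MFCCs, AR order 3; neither value is taken from the papers):

```python
import numpy as np

def mean_var(window):
    """Baseline integration: stack per-dimension mean and variance."""
    return np.concatenate([window.mean(axis=0), window.var(axis=0)])

def mar_features(window, order=3):
    """MAR integration: fit x_t ~ c + sum_p A_p x_{t-p} by least squares
    and use the coefficients as the integrated feature, so the feature
    also captures temporal dynamics and cross-dimension dependencies."""
    T, D = window.shape
    X = np.hstack([np.ones((T - order, 1))] +
                  [window[order - p - 1:T - p - 1] for p in range(order)])
    Y = window[order:]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (1 + order*D, D)
    return coef.ravel()

rng = np.random.default_rng(2)
frames = rng.normal(size=(129, 13))   # stand-in for 13-dim MFCC frames
print(mean_var(frames).shape)         # (26,)
print(mar_features(frames).shape)     # ((1 + 3*13) * 13,) = (520,)
```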

  11. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset, suggesting that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  12. Temporal pattern recognition based on instantaneous spike rate coding in a simple auditory system.

    Science.gov (United States)

    Nabatiyan, A; Poulet, J F A; de Polavieja, G G; Hedwig, B

    2003-10-01

    Auditory pattern recognition by the CNS is a fundamental process in acoustic communication. Because crickets communicate with stereotyped patterns of constant frequency syllables, they are established models to investigate the neuronal mechanisms of auditory pattern recognition. Here we provide evidence that for the neural processing of amplitude-modulated sounds, the instantaneous spike rate rather than the time-averaged neural activity is the appropriate coding principle by comparing both coding parameters in a thoracic interneuron (Omega neuron ON1) of the cricket (Gryllus bimaculatus) auditory system. When stimulated with different temporal sound patterns, the analysis of the instantaneous spike rate demonstrates that the neuron acts as a low-pass filter for syllable patterns. The instantaneous spike rate is low at high syllable rates, but prominent peaks in the instantaneous spike rate are generated as the syllable rate resembles that of the species-specific pattern. The occurrence and repetition rate of these peaks in the neuronal discharge are sufficient to explain temporal filtering in the cricket auditory pathway as they closely match the tuning of phonotactic behavior to different sound patterns. Thus temporal filtering or "pattern recognition" occurs at an early stage in the auditory pathway.
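    The coding principle contrasted here is easy to make concrete: the instantaneous rate at each spike is the reciprocal of the preceding interspike interval, so a burst of closely spaced spikes yields a prominent rate peak even when the time-averaged rate over the trial is low. A toy sketch (synthetic spike train, not the authors' analysis code):

```python
import numpy as np

def instantaneous_rate(spike_times):
    """Instantaneous rate at each spike = 1 / preceding interspike interval."""
    t = np.asarray(spike_times)
    return t[1:], 1.0 / np.diff(t)

# A 5-spike burst (5-ms intervals) inside an otherwise sparse 1-s train
spikes = np.concatenate([[0.05, 0.2], 0.4 + 0.005 * np.arange(5), [0.8, 0.95]])
times, inst = instantaneous_rate(spikes)
print(inst.max())         # ~200 spikes/s peak at the burst
print(len(spikes) / 1.0)  # time-averaged rate: only 9 spikes/s
```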

  13. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain may be more sensitive to information received from behind than to information from either side.

  14. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and firing rates in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  15. Auditory Temporal Order Discrimination and Backward Recognition Masking in Adults with Dyslexia

    Science.gov (United States)

    Griffiths, Yvonne M.; Hill, Nicholas I.; Bailey, Peter J.; Snowling, Margaret J.

    2003-01-01

    The ability of 20 adult dyslexic readers to extract frequency information from successive tone pairs was compared with that of IQ-matched controls using temporal order discrimination and auditory backward recognition masking (ABRM) tasks. In both paradigms, the interstimulus interval (ISI) between tones in a pair was either short (20 ms) or long…

  16. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  17. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble of auditory...

  18. Spectro-Temporal Methods in Primary Auditory Cortex

    Science.gov (United States)

    2006-01-01

    ... Spike-triggered averaging of the spectro-temporal envelope directly gives a similar spectro-temporal response field to the spike-triggered...
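    The surviving fragment refers to spike-triggered averaging, in which averaging the stimulus spectrogram over a window preceding each spike estimates the spectro-temporal response field. A minimal sketch with a planted filter so the average provably recovers it (all dimensions, the noise stimulus, and the threshold "neuron" are illustrative assumptions):

```python
import numpy as np

def sta_strf(spectrogram, spike_bins, n_lags):
    """Spike-triggered average: mean spectrogram slice preceding each spike.
    spectrogram: (freq_bins, time_bins); spike_bins: spike times as indices."""
    slices = [spectrogram[:, t - n_lags:t] for t in spike_bins if t >= n_lags]
    return np.mean(slices, axis=0)                 # (freq_bins, n_lags)

rng = np.random.default_rng(3)
stim = rng.normal(size=(16, 5000))                 # white-noise spectrogram
true_strf = np.zeros((16, 10)); true_strf[6, 7] = 1.0   # planted filter
drive = np.array([np.sum(true_strf * stim[:, t - 10:t]) for t in range(10, 5000)])
spikes = 10 + np.nonzero(drive > 2.0)[0]           # threshold-crossing "spikes"
est = sta_strf(stim, spikes, n_lags=10)
print(np.unravel_index(np.argmax(est), est.shape)) # -> (6, 7), the planted peak
```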

  19. Temporal Feature Integration for Music Organisation

    OpenAIRE

    Meng, Anders; Larsen, Jan; Hansen, Lars Kai

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisation...

  20. An auditory illusion of infinite tempo change based on multiple temporal levels.

    Directory of Open Access Journals (Sweden)

    Guy Madison

    Full Text Available Humans and a few select insect and reptile species synchronise inter-individual behaviour without any time lag by predicting the time of future events rather than reacting to them. This is evident in music performance, dance, and drill. Although repetition of equal time intervals (i.e. isochrony) is the central principle for such prediction, this simple information is used in a flexible and complex way that accommodates multiples, subdivisions, and gradual changes of intervals. The scope of this flexibility remains largely uncharted, and the underlying mechanisms are a matter for speculation. Here I report an auditory illusion that highlights some aspects of this behaviour and that provides a powerful tool for its future study. A sound pattern is described that affords multiple alternative and concurrent rates of recurrence (temporal levels). An algorithm that systematically controls time intervals and the relative loudness among these levels creates an illusion that the perceived rate speeds up or slows down infinitely. Human participants synchronised hand movements with their perceived rate of events, and exhibited a change in their movement rate that was several times larger than the physical change in the sound pattern. The illusion demonstrates the duality between the external signal and the internal predictive process, such that people's tendency to follow their own subjective pulse overrides the overall properties of the stimulus pattern. Furthermore, accurate synchronisation with sounds separated by more than 8 s demonstrates that multiple temporal levels are employed for facilitating temporal organisation and integration by the human brain. A number of applications of the illusion and the stimulus pattern are suggested.
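    The algorithm described, concurrent octave-spaced temporal levels whose rates drift while a loudness envelope over log-rate fades levels in and out, is the rhythmic analogue of the Shepard/Risset glissando. A toy generator under assumed parameter values (the base rate, level count, and sweep are not taken from the paper):

```python
import numpy as np

def risset_rhythm(duration=20.0, fs=8000, base_rate=2.0, n_levels=4, sweep=1.0):
    """Click pattern with n_levels octave-spaced pulse rates. All rates drift
    upward by `sweep` octaves over `duration`, while a raised-cosine loudness
    envelope over log-rate fades the fastest level out and the slowest in,
    so the perceived tempo keeps rising although the overall distribution
    of rates in the pattern stays fixed."""
    t = np.arange(0.0, duration, 1.0 / fs)
    out = np.zeros_like(t)
    for k in range(n_levels):
        octs = (k + sweep * t / duration) % n_levels   # octave position
        rate = base_rate * 2.0 ** octs                 # pulses per second
        phase = np.cumsum(rate) / fs                   # integrated rate = beats
        clicks = np.diff(np.floor(phase), prepend=0.0) > 0
        loud = 0.5 * (1 - np.cos(2 * np.pi * octs / n_levels))  # fade in/out
        out += clicks * loud
    return out / out.max()

signal = risset_rhythm()  # write to a WAV file at fs=8000 to hear the effect
```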

  1. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    Full Text Available In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW the topography of responses in quiet sleep was different for "click" and voice stimuli: responses to "clicks" became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW, auditory evoked delta-brushes progressively disappeared and were replaced by a low amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory driven delta-brushes represent a ubiquitous feature of the human sensory cortex.

  2. Multisensory temporal integration: Task and stimulus dependencies

    OpenAIRE

    Stevenson, Ryan A.; Wallace, Mark T.

    2013-01-01

    The ability of human sensory systems to integrate information across the different modalities provides a wide range of behavioral and perceptual benefits. This integration process is dependent upon the temporal relationship of the different sensory signals, with stimuli occurring close together in time typically resulting in the largest behavior changes. The range of temporal intervals over which such benefits are seen is typically referred to as the temporal binding window (TBW). Given the i...

  3. Altered temporal dynamics of neural adaptation in the aging human auditory cortex.

    Science.gov (United States)

    Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas

    2016-09-01

    Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging.

  4. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging.
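    The two statistics reported here, a decline rate per decade and the age-TGDT relation controlling for audibility, correspond to an ordinary regression slope and a partial correlation. A sketch on simulated data (the effect sizes and the audibility covariate are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1071
age = rng.uniform(18, 98, n)
audibility = 10 + 0.3 * age + rng.normal(0, 5, n)   # invented dB-HL covariate
tgdt = 5 + 0.105 * age + 0.05 * audibility + rng.normal(0, 1.5, n)  # ms

# Decline per decade: slope of TGDT on age, times 10 years
slope = np.polyfit(age, tgdt, 1)[0]
print(f"{10 * slope:.2f} ms per decade")

# Partial correlation of age and TGDT controlling for audibility:
# correlate the residuals after regressing each variable on audibility
def residualize(y, x):
    return y - np.polyval(np.polyfit(x, y, 1), x)

r_partial = np.corrcoef(residualize(age, audibility),
                        residualize(tgdt, audibility))[0, 1]
print(f"partial r = {r_partial:.2f}")
```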

  5. Resolução temporal auditiva em idosos Auditory temporal resolution in elderly people

    Directory of Open Access Journals (Sweden)

    Flávia Duarte Liporaci

    2010-12-01

    Full Text Available PURPOSE: To assess the auditory processing of elderly patients using the temporal resolution Gaps-in-Noise test, and to verify whether the presence of hearing loss influences performance on this test. METHODS: Sixty-five elderly listeners, aged between 60 and 79 years, were assessed with the Gaps-in-Noise test. To meet the inclusion criteria, the following procedures were carried out: anamnesis, mini-mental state examination, and basic audiological evaluation. The participants were first studied as a single group, and were then divided into three groups according to the audiometric results at the frequencies of 500 Hz and 1, 2, 3, 4 and 6 kHz: G1 with normal hearing, G2 with mild hearing loss, and G3 with moderate hearing loss. RESULTS: Across the whole sample, the mean gap detection threshold and percentage of correct responses were 8.1 ms and 52.6% for the right ear, and 8.2 ms and 52.2% for the left ear. In G1, these measures were 7.3 ms and 57.6% for the right ear, and 7.7 ms and 55.8% for the left ear. In G2, they were 8.2 ms and 52.5% for the right ear, and 7.9 ms and 53.2% for the left ear. In G3, they were 9.2 ms and 45.2% for both ears. CONCLUSION: The presence of hearing loss raised gap detection thresholds and lowered the percentage of correct responses on the Gaps-in-Noise test.

  6. Temporal Feature Integration for Music Organisation

    DEFF Research Database (Denmark)

    Meng, Anders

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisation, and human evaluations of these have been obtained to assess the subjectivity of the datasets. A 'feature ranking' approach is proposed for ranking the short-time features at larger time-scales according to their discriminative power in a music genre classification task. The multivariate AR (MAR) model has been proposed for temporal feature integration; it effectively models local dynamical structure...

  7. The Effect of Temporal Context on the Sustained Pitch Response in Human Auditory Cortex

    OpenAIRE

    Gutschalk, Alexander; Patterson, Roy D.; Scherg, Michael; Uppenkamp, Stefan; Rupp, André

    2006-01-01

    Recent neuroimaging studies have shown that activity in lateral Heschl’s gyrus covaries specifically with the strength of musical pitch. Pitch strength is important for the perceptual distinctiveness of an acoustic event, but in complex auditory scenes, the distinctiveness of an event also depends on its context. In this magnetoencephalography study, we evaluate how temporal context influences the sustained pitch response (SPR) in lateral Heschl’s gyrus. In 2 sequences of continuously alternating...

  8. The impact of a concurrent motor task on auditory and visual temporal discrimination tasks.

    Science.gov (United States)

    Mioni, Giovanna; Grassi, Massimo; Tarantino, Vincenza; Stablum, Franca; Grondin, Simon; Bisiacchi, Patrizia S

    2016-04-01

    Previous studies have shown the presence of an interference effect on temporal perception when participants are required to simultaneously execute a nontemporal task. Such interference likely has an attentional source. In the present work, a temporal discrimination task was performed alone or together with a self-paced finger-tapping task used as concurrent, nontemporal task. Temporal durations were presented in either the visual or the auditory modality, and two standard durations (500 and 1,500 ms) were used. For each experimental condition, the participant's threshold was estimated and analyzed. The mean Weber fraction was higher in the visual than in the auditory modality, but only for the subsecond duration, and it was higher with the 500-ms than with the 1,500-ms standard duration. Interestingly, the Weber fraction was significantly higher in the dual-task condition, but only in the visual modality. The results suggest that the processing of time in the auditory modality is likely automatic, but not in the visual modality.
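    Thresholds of this kind are commonly estimated adaptively; the abstract does not name the procedure, so the sketch below assumes a standard two-down/one-up staircase run against a simulated observer, from which the Weber fraction is the converged threshold divided by the standard duration (all parameter values are illustrative):

```python
import numpy as np

def staircase_weber(standard, true_jnd, step_frac=0.05, n_reversals=10):
    """Two-down/one-up staircase (converges near the 70.7%-correct point)
    run against a simulated observer with a logistic psychometric function.
    Returns the threshold; Weber fraction = threshold / standard."""
    rng = np.random.default_rng(5)
    delta, step = 0.5 * standard, step_frac * standard
    n_correct, direction, reversals = 0, -1, []
    while len(reversals) < n_reversals:
        p = 0.5 + 0.5 / (1 + np.exp(-4 * (delta - true_jnd) / true_jnd))
        if rng.random() < p:                 # correct response
            n_correct += 1
            if n_correct == 2:               # two correct -> make it harder
                n_correct = 0
                if direction == +1:
                    reversals.append(delta)
                direction, delta = -1, max(delta - step, step)
        else:                                # one wrong -> make it easier
            n_correct = 0
            if direction == -1:
                reversals.append(delta)
            direction, delta = +1, delta + step
    return float(np.mean(reversals[2:]))     # discard the early reversals

for standard in (500.0, 1500.0):             # ms, the two standards above
    thr = staircase_weber(standard, true_jnd=0.15 * standard)
    print(standard, thr / standard)          # ~0.13-0.15, near the true value
```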

  9. Pairing tone trains with vagus nerve stimulation induces temporal plasticity in auditory cortex.

    Science.gov (United States)

    Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P

    2012-01-01

    The selectivity of neurons in sensory cortex can be modified by pairing neuromodulator release with sensory stimulation. Repeated pairing of electrical stimulation of the cholinergic nucleus basalis, for example, induces input specific plasticity in primary auditory cortex (A1). Pairing nucleus basalis stimulation (NBS) with a tone increases the number of A1 neurons that respond to the paired tone frequency. Pairing NBS with fast or slow tone trains can respectively increase or decrease the ability of A1 neurons to respond to rapidly presented tones. Pairing vagus nerve stimulation (VNS) with a single tone alters spectral tuning in the same way as NBS-tone pairing without the need for brain surgery. In this study, we tested whether pairing VNS with tone trains can change the temporal response properties of A1 neurons. In naïve rats, A1 neurons respond strongly to tones repeated at rates up to 10 pulses per second (pps). Repeatedly pairing VNS with 15 pps tone trains increased the temporal following capacity of A1 neurons and repeatedly pairing VNS with 5 pps tone trains decreased the temporal following capacity of A1 neurons. Pairing VNS with tone trains did not alter the frequency selectivity or tonotopic organization of auditory cortex neurons. Since VNS is well tolerated by patients, VNS-tone train pairing represents a viable method to direct temporal plasticity in a variety of human conditions associated with temporal processing deficits.

  10. Matching Pursuit Analysis of Auditory Receptive Fields' Spectro-Temporal Properties

    Science.gov (United States)

    Bach, Jörg-Hendrik; Kollmeier, Birger; Anemüller, Jörn

    2017-01-01

    Gabor filters have long been proposed as models for spectro-temporal receptive fields (STRFs), with their specific spectral and temporal rate of modulation qualitatively replicating characteristics of STRF filters estimated from responses to auditory stimuli in physiological data. The present study builds on the Gabor-STRF model by proposing a methodology to quantitatively decompose STRFs into a set of optimally matched Gabor filters through matching pursuit, and by quantitatively evaluating spectral and temporal characteristics of STRFs in terms of the derived optimal Gabor-parameters. To summarize a neuron's spectro-temporal characteristics, we introduce a measure for the “diagonality,” i.e., the extent to which an STRF exhibits spectro-temporal transients which cannot be factorized into a product of a spectral and a temporal modulation. With this methodology, it is shown that approximately half of 52 analyzed zebra finch STRFs can each be well approximated by a single Gabor or a linear combination of two Gabor filters. Moreover, the dominant Gabor functions tend to be oriented either in the spectral or in the temporal direction, with truly “diagonal” Gabor functions rarely being necessary for reconstruction of an STRF's main characteristics. As a toy example for the applicability of STRF and Gabor-STRF filters to auditory detection tasks, we use STRF filters as features in an automatic event detection task and compare them to idealized Gabor filters and mel-frequency cepstral coefficients (MFCCs). STRFs classify a set of six everyday sounds with an accuracy similar to reference Gabor features (94% recognition rate). Spectro-temporal STRF and Gabor features outperform reference spectral MFCCs in quiet and in low noise conditions (down to 0 dB signal to noise ratio). PMID:28232791
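    Matching pursuit over a Gabor dictionary is generic: repeatedly project the residual onto every atom, keep the best match, and subtract its contribution. A compact sketch on a toy STRF built from the dictionary itself (the grid spacing, bandwidths, and atom count are illustrative, not the paper's settings):

```python
import numpy as np

def gabor_2d(shape, f0, t0, sf, st, wf, wt, phase=0.0):
    """2-D Gabor atom: Gaussian envelope times a spectro-temporal carrier."""
    f, t = np.mgrid[0:shape[0], 0:shape[1]]
    env = np.exp(-(f - f0) ** 2 / (2 * sf ** 2) - (t - t0) ** 2 / (2 * st ** 2))
    atom = env * np.cos(2 * np.pi * (wf * (f - f0) + wt * (t - t0)) + phase)
    return atom / np.linalg.norm(atom)

def matching_pursuit(strf, dictionary, n_atoms=2):
    """Greedy MP: pick the atom with the largest projection, subtract, repeat."""
    residual, picked = strf.copy(), []
    for _ in range(n_atoms):
        projs = [np.sum(residual * a) for a in dictionary]
        best = int(np.argmax(np.abs(projs)))
        picked.append((best, projs[best]))
        residual -= projs[best] * dictionary[best]
    return picked, residual

shape = (16, 20)
# Tiny dictionary of purely spectral (wt=0) and purely temporal (wf=0) atoms
dictionary = [gabor_2d(shape, f0, t0, 2.5, 3.0, wf, wt)
              for f0 in (4, 8, 12) for t0 in (5, 10, 15)
              for wf, wt in ((0.25, 0.0), (0.0, 0.25))]
toy_strf = 1.5 * dictionary[4] - 0.8 * dictionary[11]
picked, residual = matching_pursuit(toy_strf, dictionary)
print(picked)                     # recovers atoms 4 and 11 with their weights
print(np.linalg.norm(residual))   # ~0 when the picked atoms are near-orthogonal
```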

  11. A Phenomenological Model of the Electrically Stimulated Auditory Nerve Fiber: Temporal and Biphasic Response Properties.

    Science.gov (United States)

    Horne, Colin D F; Sumner, Christian J; Seeber, Bernhard U

    2016-01-01

    We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli, in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model's parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate and fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in the threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF response to a large variety of pulse shapes are reproduced correctly by this model.
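    The base model extended by this work, the stochastic leaky integrate-and-fire (SLIF) neuron, fits in a few lines: a leaky membrane driven by the stimulus current plus Gaussian noise, a threshold, and an absolute refractory period. A minimal sketch of that base model only, with placeholder parameters rather than the paper's fitted values (the latency-delay and anodic-abolition extensions described above are omitted):

```python
import numpy as np

def slif_response(stim, dt=1e-5, tau=2e-4, thresh=1.0, sigma=0.08, t_ref=5e-4):
    """Stochastic leaky integrate-and-fire: v relaxes toward the stimulus
    current with time constant tau, plus Gaussian noise; a spike is emitted
    at threshold, followed by a reset and an absolute refractory period."""
    rng = np.random.default_rng()
    v, ref_until, spikes = 0.0, -1.0, []
    for i, current in enumerate(stim):
        t = i * dt
        v += (current - v) * dt / tau + sigma * np.sqrt(dt / tau) * rng.normal()
        if t >= ref_until and v >= thresh:
            spikes.append(t)
            v, ref_until = 0.0, t + t_ref    # reset, enter refractoriness
    return np.array(spikes)

# Firing probability, mean latency, and jitter for a 50-us near-threshold pulse
dt, pulse = 1e-5, np.zeros(200)
pulse[10:15] = 4.6                            # just-suprathreshold amplitude
trials = [slif_response(pulse, dt=dt) for _ in range(500)]
latencies = np.array([tr[0] for tr in trials if tr.size])
print(len(latencies) / 500)                   # firing efficiency < 1
print(latencies.mean(), latencies.std())      # mean latency and jitter (s)
```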

  12. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance and acetylcholinesterase... We describe the location of the lateral belt and parabelt with respect to gross anatomical landmarks. Architectonic criteria for the core, lateral belt, and parabelt were readily adapted from monkey to human. Additionally, we found evidence for an architectonic subdivision within the parabelt, present in both species...

  13. Gradients and modulation of K(+) channels optimize temporal accuracy in networks of auditory neurons.

    Directory of Open Access Journals (Sweden)

    Leonard K Kaczmarek

    Full Text Available Accurate timing of action potentials is required for neurons in auditory brainstem nuclei to encode the frequency and phase of incoming sound stimuli. Many such neurons express "high threshold" Kv3-family channels that are required for firing at high rates (>~200 Hz). Kv3 channels are expressed in gradients along the medial-lateral tonotopic axis of the nuclei. Numerical simulations of auditory brainstem neurons were used to calculate the input-output relations of ensembles of 1-50 neurons, stimulated at rates between 100 and 1500 Hz. Individual neurons with different levels of potassium currents differ in their ability to follow specific rates of stimulation, but all perform poorly when the stimulus rate is greater than the maximal firing rate of the neurons. The temporal accuracy of the combined synaptic output of an ensemble is, however, enhanced by the presence of gradients in Kv3 channel levels over that measured when neurons express uniform levels of channels. Surprisingly, at high rates of stimulation, temporal accuracy is also enhanced by the occurrence of random spontaneous activity, such as is normally observed in the absence of sound stimulation. For any pattern of stimulation, however, greatest accuracy is observed when, in the presence of spontaneous activity, the level of potassium conductance in all of the neurons is adjusted to that found in the subset of neurons that respond better than their neighbors. This optimization of response by adjusting the K(+) conductance occurs for stimulus patterns containing either single or multiple frequencies in the phase-locking range. The findings suggest that gradients of channel expression are required for normal auditory processing and that changes in levels of potassium currents across the nuclei, by mechanisms such as protein phosphorylation and rapid changes in channel synthesis, adapt the nuclei to the ongoing auditory environment.

  14. Auditory-somatosensory temporal sensitivity improves when the somatosensory event is caused by voluntary body movement

    Directory of Open Access Journals (Sweden)

    Norimichi Kitagawa

    2016-12-01

    Full Text Available When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that the performance of temporal order judgment (TOJ) in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M=42.5 ms ±3.8 s.e.m.) than that when their finger was passively stimulated (passive condition: M=66.8 ms ±6.3 s.e.m.). We further examined whether the performance improvement with voluntary action can be attributed to the prediction of the timing of the stimulation from sensory cues (sensory-based prediction), to kinesthetic cues contained in voluntary action, and/or to the prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant's finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst at regular intervals (predictable condition), the TOJ performance was not improved from that in the passive condition. These results suggest that the improvement in sensitivity to temporal disparity between somatosensory and auditory events caused by voluntary action cannot be attributed to sensory-based prediction or kinesthetic cues. Rather, the prediction from the efference copy of the motor command would be crucial for improving the temporal sensitivity.

  15. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale and the intraparietal sulcus, demonstrating that this latter area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682

  16. Modulation of auditory evoked responses to spectral and temporal changes by behavioral discrimination training

    Directory of Open Access Journals (Sweden)

    Okamoto Hidehiko

    2009-12-01

    Full Text Available Abstract Background Due to auditory experience, musicians have better auditory expertise than non-musicians. An increased neocortical activity during auditory oddball stimulation was observed in different studies for musicians and for non-musicians after discrimination training. This suggests a modification of synaptic strength among simultaneously active neurons due to the training. We used amplitude-modulated (AM) tones presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians in order to see if behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses like N1 and mismatch negativity (MMN) triggered by sound changes were recorded by a whole head magnetoencephalographic (MEG) system. We investigated (i) how the auditory cortex reacts to pitch differences (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex. Results The results showed that, in addition to an improvement of the behavioral discrimination performance, discrimination training of carrier frequency changes significantly modulates the MMN and N1 response amplitudes after the training. This process was accompanied by an attention switch to the deviant stimulus after the training procedure, identified by the occurrence of a P3a component. In contrast, the training in discrimination of modulation frequency was not sufficient to improve the behavioral discrimination performance and to alter the cortical response (MMN) to the modulation frequency change. The N1 amplitude, however, showed significant increase after and one week after the training. Similar to the training in carrier frequency discrimination, a long lasting...

  17. Congenital external auditory canal atresia and stenosis: temporal bone CT findings

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Hoon; Kim, Bum Soo; Jung, So Lyung; Kim, Young Joo; Chun, Ho Jong; Choi, Kyu Ho; Park, Shi Nae [College of Medicine, Catholic Univ. of Korea, Seoul (Korea, Republic of)

    2002-04-01

    To determine the computed tomographic (CT) findings of atresia and stenosis of the external auditory canal (EAC), and to describe associated abnormalities in surrounding structures. We retrospectively reviewed the axial and coronal CT images of the temporal bone in 15 patients (M:F = 8:7; mean age, 15.8 years) with 16 cases of EAC atresia (unilateral n=11, bilateral n=1) and EAC stenosis (unilateral n=3). Associated abnormalities of the EAC, tympanic cavity, ossicles, mastoid air cells, eustachian tube, facial nerve course, mandibular condyle and condylar fossa, sigmoid sinus and jugular bulb, and the base of the middle cranial fossa were evaluated. Thirteen cases of bony EAC atresia (one bilateral), with an atretic bony plate, were noted, along with one case of unilateral membranous atresia, in which soft tissue occupied the EAC. A unilateral lesion occurred more frequently on the right temporal bone (n=8, 73%). Associated abnormalities included a small tympanic cavity (n=8, 62%), decreased mastoid pneumatization (n=8, 62%), displacement of the mandibular condyle and the posterior wall of the condylar fossa (n=7, 54%), dilatation of the eustachian tube (n=7, 54%), and inferior displacement of the temporal fossa base (n=8, 62%). Abnormalities of the ossicles were noted in the malleus (n=12, 92%), incus (n=10, 77%), and stapes (n=6, 46%). The course of the facial nerve was abnormal in four cases, and abnormality of the auditory canal was noted in one. Among the three cases of EAC stenosis, ossicular aplasia was observed in one, and in another the location of the mandibular condyle and condylar fossa was abnormal; in the remaining case there was no associated abnormality. Atresia of the EAC is frequently accompanied by abnormalities of the middle ear cavity, ossicles, and adjacent structures other than the inner ear. For patients with atresia or stenosis of this canal, CT of the temporal bone is essential for evaluating these associated abnormalities.

  18. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Full Text Available Previous studies showed that the amplitude and latency of the auditory offset cortical response depend on the history of the sound, implicating echoic memory in shaping the response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repetition, indicating that the brain recognized the timing of the offset by using information on the repetition frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a 1-s train of clicks elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of the Off-P50m depended on the inter-stimulus interval (ISI) of the click train: it was longest at an ISI of 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r² for peak latency and ISI was as high as 0.99, suggesting that sensory storage of the stimulation frequency accurately determines the Off-P50m latency. Statistical analysis revealed that the latencies of all pairs, except that between 200 and 400 Hz, were significantly different, indicating a very high temporal resolution of sensory storage of approximately 5 ms.
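
    The reported latency-ISI relationship amounts to a simple linear regression; a minimal sketch of that computation follows (the latency values are invented for illustration, not the measured data).

        import numpy as np

        isi = np.array([2.5, 5.0, 10.0, 20.0, 40.0])         # click ISIs (ms)
        latency = np.array([50.5, 51.0, 52.1, 53.9, 58.0])   # hypothetical Off-P50m latencies (ms)

        slope, intercept = np.polyfit(isi, latency, 1)       # least-squares line
        r = np.corrcoef(isi, latency)[0, 1]
        print(f"latency = {slope:.2f}*ISI + {intercept:.1f} ms, r^2 = {r**2:.3f}")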

  19. Multisensory temporal integration: Task and stimulus dependencies

    Science.gov (United States)

    Stevenson, Ryan A.; Wallace, Mark T.

    2013-01-01

    The ability of human sensory systems to integrate information across the different modalities provides a wide range of behavioral and perceptual benefits. This integration process is dependent upon the temporal relationship of the different sensory signals, with stimuli occurring close together in time typically resulting in the largest behavioral changes. The range of temporal intervals over which such benefits are seen is typically referred to as the temporal binding window (TBW). Given the importance of temporal factors in multisensory integration under both normal and atypical circumstances such as autism and dyslexia, the TBW has been measured with a variety of experimental protocols that differ according to criterion, task, and stimulus type, making comparisons across experiments difficult. In the current study we attempt to elucidate the role that these various factors play in the measurement of this important construct. The results show a strong effect of stimulus type, with the TBW assessed with speech stimuli being both larger and more symmetrical than that seen using simple and complex non-speech stimuli. These effects are robust across task and statistical criteria, and are highly consistent within individuals, suggesting substantial overlap in the neural and cognitive operations that govern multisensory temporal processes. PMID:23604624
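
    One common way to quantify a TBW from simultaneity-judgment data, shown as a hedged sketch below, is to fit a Gaussian to the proportion of "synchronous" responses across SOAs and report its width; the data points and the full-width-at-half-maximum criterion here are illustrative assumptions, not the procedure of any particular study cited above.

        import numpy as np
        from scipy.optimize import curve_fit

        soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])   # ms; audio lead < 0
        p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.60, 0.30, 0.10])

        def gauss(x, amp, mu, sigma):
            return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

        (amp, mu, sigma), _ = curve_fit(gauss, soa, p_sync, p0=[1.0, 0.0, 150.0])
        print(f"centre = {mu:.0f} ms, TBW (FWHM) = {2.355 * abs(sigma):.0f} ms")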

  20. Multisensory Temporal Integration in Autism Spectrum Disorders

    Science.gov (United States)

    Siemann, Justin K.; Schneider, Brittany C.; Eberly, Haley E.; Woynaroski, Tiffany G.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    The new DSM-5 diagnostic criteria for autism spectrum disorders (ASDs) include sensory disturbances in addition to the well-established language, communication, and social deficits. One sensory disturbance seen in ASD is an impaired ability to integrate multisensory information into a unified percept. This may arise from an underlying impairment in which individuals with ASD have difficulty perceiving the temporal relationship between cross-modal inputs, an important cue for multisensory integration. Such impairments in multisensory processing may cascade into higher-level deficits, impairing day-to-day functioning on tasks, such as speech perception. To investigate multisensory temporal processing deficits in ASD and their links to speech processing, the current study mapped performance on a number of multisensory temporal tasks (with both simple and complex stimuli) onto the ability of individuals with ASD to perceptually bind audiovisual speech signals. High-functioning children with ASD were compared with a group of typically developing children. Performance on the multisensory temporal tasks varied with stimulus complexity for both groups; less precise temporal processing was observed with increasing stimulus complexity. Notably, individuals with ASD showed a speech-specific deficit in multisensory temporal processing. Most importantly, the strength of perceptual binding of audiovisual speech observed in individuals with ASD was strongly related to their low-level multisensory temporal processing abilities. Collectively, these results are the first to illustrate links between multisensory temporal function and speech processing in ASD, strongly suggesting that deficits in low-level sensory processing may cascade into higher-order domains, such as language and communication. PMID:24431427

  1. Quantifying auditory temporal stability in a large database of recorded music.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    Full Text Available "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy" in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training, exercise (e.g., jogging, or entertainment (e.g., continuous dance mixes. Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature", none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files. A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  2. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

    Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of the MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex to temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e., rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent, if not greater, right hemisphere contributions to temporal processing of small duration changes.

  3. Using a staircase procedure for the objective measurement of auditory stream integration and segregation thresholds

    Directory of Open Access Journals (Sweden)

    Mona Isabel Spielmann

    2013-08-01

    Full Text Available Auditory scene analysis describes the ability to segregate relevant sounds from the environment and to integrate them into a single sound stream, using the characteristics of the sounds to determine whether or not they are related. This study aims to contrast task performance in objective threshold measurements of segregation and integration using identical stimuli, manipulating two variables known to influence streaming: inter-stimulus interval (ISI) and frequency difference (Δf). For each measurement, one parameter (either ISI or Δf) was held constant while the other was varied in a staircase procedure. With this paradigm, it is possible to test within subjects across multiple conditions, covering a wide Δf and ISI range in one testing session. The objective tasks were based on across-stream temporal judgments (facilitated by integration) and within-stream deviance detection (facilitated by segregation). Results show that the objective integration task is well suited for combination with the staircase procedure, as it yields consistent threshold measurements for separate variations of ISI and Δf, and is significantly related to the subjective thresholds. The objective segregation task appears less suited to the staircase procedure. With the integration-based staircase paradigm, a comprehensive assessment of streaming thresholds can be obtained in a relatively short space of time. This permits efficient threshold measurements, particularly in groups for which there is little prior knowledge of the relevant parameter space for streaming perception.
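
    A minimal sketch of an adaptive staircase of the kind described is given below; the 2-down/1-up rule, step size, floor, and stopping criterion are illustrative assumptions rather than the authors' exact settings, and respond stands for any callable that returns True when the listener responds correctly at the given Δf.

        def staircase(respond, df_start=12.0, step=1.0, df_min=0.5, n_reversals=8):
            """Adapt the frequency difference (semitones) toward threshold."""
            df, run, direction, reversals = df_start, 0, -1, []
            while len(reversals) < n_reversals:
                if respond(df):                     # correct response
                    run += 1
                    if run == 2:                    # two correct -> harder (2-down/1-up)
                        run = 0
                        if direction == +1:
                            reversals.append(df)
                        direction = -1
                        df = max(df_min, df - step)
                else:                               # error -> easier
                    run = 0
                    if direction == -1:
                        reversals.append(df)
                    direction = +1
                    df += step
            return sum(reversals) / len(reversals)  # mean of reversals ~ threshold

    A 2-down/1-up rule of this kind converges on the Δf yielding roughly 71% correct responses.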

  4. Receptive amusia: temporal auditory processing deficit in a professional musician following a left temporo-parietal lesion.

    Science.gov (United States)

    Di Pietro, Marie; Laganaro, Marina; Leemann, Béatrice; Schnider, Armin

    2004-01-01

    This study examined musical processing in a professional musician who suffered from amusia after a left temporo-parietal stroke. The patient showed preserved metric judgement and normal performance in all aspects of melodic processing. By contrast, he lost the ability to discriminate or reproduce rhythms. Arrhythmia was only observed in the auditory modality: discrimination of auditorily presented rhythms was severely impaired, whereas performance was normal in the visual modality. Moreover, a length effect was observed in discrimination of rhythm, while this was not the case for melody discrimination. The arrhythmia could not be explained by low-level auditory processing impairments such as interval and length discrimination, and the impairment was limited to auditory input, since the patient produced correct rhythmic patterns from a musical score. Since rhythm processing was selectively disturbed in the auditory modality, the arrhythmia cannot be attributed to an impairment of supra-modal temporal processing. Rather, our findings suggest modality-specific encoding of musical temporal information. Moreover, it is proposed that the processing of auditory rhythmic sequences involves a specific left-hemispheric temporal buffer.

  5. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance

    Science.gov (United States)

    Stevenson, Ryan A.; Fister, Juliane Krueger; Barnett, Zachary P.; Nidiffer, Aaron R.; Wallace, Mark T.

    2012-01-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study we provide a novel exploration of the impact on behavioral performance of systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and a simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli occurring in the periphery as synchronous at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors, and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus. PMID:22447249

  6. Temporal expectation and attention jointly modulate auditory oscillatory activity in the beta band.

    Science.gov (United States)

    Todorovic, Ana; Schoffelen, Jan-Mathijs; van Ede, Freek; Maris, Eric; de Lange, Floris P

    2015-01-01

    The neural response to a stimulus is influenced by endogenous factors such as expectation and attention. Current research suggests that expectation and attention exert their effects in opposite directions, where expectation decreases neural activity in sensory areas, while attention increases it. However, expectation and attention are usually studied either in isolation or confounded with each other. A recent study suggests that expectation and attention may act jointly on sensory processing, by increasing the neural response to expected events when they are attended, but decreasing it when they are unattended. Here we test this hypothesis in an auditory temporal cueing paradigm using magnetoencephalography in humans. In our study participants attended to, or away from, tones that could arrive at expected or unexpected moments. We found a decrease in auditory beta band synchrony to expected (versus unexpected) tones if they were unattended, but no difference if they were attended. Modulations in beta power were already evident prior to the expected onset times of the tones. These findings suggest that expectation and attention jointly modulate sensory processing.

  7. Superior Temporal Activity for the Retrieval Process of Auditory-Word Associations

    Directory of Open Access Journals (Sweden)

    Toshimune Kambara

    2011-10-01

    Full Text Available Previous neuroimaging studies have reported that learning multisensory associations involves the superior temporal regions (Tanabe et al., 2005). However, the neural mechanisms underlying the retrieval of multisensory associations remain unclear. This functional MRI (fMRI) study investigated brain activations during the retrieval of multisensory associations. Eighteen right-handed college-aged Japanese participants learned associations between meaningless pictures and words (Vw), meaningless sounds and words (Aw), and meaningless sounds and visual words (W). During fMRI scanning, participants were presented with old and new words and were required to judge whether the words had been included in the Vw, Aw, or W conditions, or were new. We found that the left superior temporal region showed greater activity during the retrieval of words learned in Aw than in Vw, whereas no region showed greater activity for the Vw condition versus the Aw condition (k > 10, p < .001, uncorrected). Taken together, the left superior temporal region could play an essential role in the retrieval process of auditory-word associations.

  8. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds, since it can enhance the detection of signals embedded in noise, an effect known as comodulation masking release (CMR). The purpose of the present thesis is to develop a computational auditory processing model that accounts for a large variety of experimental data on CMR, in order to obtain a more thorough understanding of the basic processing principles underlying the processing of across-frequency modulations. The second…

  9. Effects of deafness and cochlear implant use on temporal response characteristics in cat primary auditory cortex.

    Science.gov (United States)

    Fallon, James B; Shepherd, Robert K; Nayagam, David A X; Wise, Andrew K; Heffer, Leon F; Landry, Thomas G; Irvine, Dexter R F

    2014-09-01

    We have previously shown that neonatal deafness of 7-13 months duration leads to loss of cochleotopy in the primary auditory cortex (AI) that can be reversed by cochlear implant use. Here we describe the effects of a similar duration of deafness and cochlear implant use on temporal processing. Specifically, we compared the temporal resolution of neurons in AI of young adult normal-hearing cats that were acutely deafened and implanted immediately prior to recording with that in three groups of neonatally deafened cats. One group of neonatally deafened cats received no chronic stimulation. The other two groups received up to 8 months of either low- or high-rate (50 or 500 pulses per second per electrode, respectively) stimulation from a clinical cochlear implant, initiated at 10 weeks of age. Deafness of 7-13 months duration had no effect on the duration of post-onset response suppression, latency, latency jitter, or the stimulus repetition rate at which units responded maximally (best repetition rate), but resulted in a statistically significant reduction in the ability of units to respond to every stimulus in a train (maximum following rate). None of the temporal response characteristics of the low-rate group differed from those in acutely deafened controls. In contrast, high-rate stimulation had diverse effects: it resulted in decreased suppression duration, longer latency and greater jitter relative to all other groups, and an increase in best repetition rate and cut-off rate relative to acutely deafened controls. The minimal effects of moderate-duration deafness on temporal processing in the present study are in contrast to its previously-reported pronounced effects on cochleotopy. Much longer periods of deafness have been reported to result in significant changes in temporal processing, in accord with the fact that duration of deafness is a major factor influencing outcome in human cochlear implantees.

  10. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  11. Morphometrical Study of the Temporal Bone and Auditory Ossicles in Guinea Pig

    Directory of Open Access Journals (Sweden)

    Ahmadali Mohammadpour

    2011-03-01

    Full Text Available In this study, the anatomy of the temporal bone and auditory ossicles was described based on dissection of ten guinea pigs. The results showed that the guinea pig temporal bone was similar to that of other animals and had three parts: squamous, tympanic, and petrous. The tympanic part was much better developed and consisted of an oval-shaped tympanic bulla with many recesses in the tympanic cavity. The auditory ossicles of the guinea pig consisted of three small bones (malleus, incus, and stapes), but the head of the malleus and the body of the incus were fused, forming a malleoincudal complex. Morphometry showed that the malleus was 3.53 ± 0.22 mm in total length. In addition to the head and handle, the malleus had two distinct processes: lateral and muscular. The incus had a total length of 1.23 ± 0.02 mm. It had a long and a short crus, the long crus being better developed. The lenticular bone was a round bone that articulated with the long crus of the incus. The stapes had a total length of 1.38 ± 0.04 mm. The anterior crus (0.86 ± 0.08 mm) was longer than the posterior crus (0.76 ± 0.08 mm). It is concluded that in the guinea pig the malleus and incus are fused, forming the malleoincudal complex, whereas in other animals these are separate bones. The stapes is larger, has a triangular shape, and its anterior and posterior crura are thicker than in other rodents. The guinea pig is therefore a good laboratory animal for otological studies.

  12. Auditory-visual integration of emotional signals in a virtual environment for cynophobia.

    Science.gov (United States)

    Taffou, Marine; Chapoulie, Emmanuelle; David, Adrien; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2012-01-01

    Cynophobia (dog phobia) has both visual and auditory relevant components. In order to investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied the efficiency of auditory-visual environments in generating presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia in order to assess the capacity of auditory-visual virtual environments (VE) to generate fear reactions. Our application involves both high fidelity visual stimulation displayed in an immersive space and 3D sound. This specificity enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality and both. Our specific presentation of animated dog stimuli creates an environment that is highly arousing, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.

  13. Properties of Auditory Temporal Integration Revealed by Mismatch Negativity

    Science.gov (United States)

    2007-11-02


  14. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways.

  15. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT

    Directory of Open Access Journals (Sweden)

    Sahar Mohammad Esmaeilzadeh

    2013-10-01

    Full Text Available Introduction: Hearing loss occurs when there is a problem with one or more parts of the ear and causes children to experience delays in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). There have been several studies focusing on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach to AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children.   Materials and Methods: In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, as well as reliable websites such as those of the American Choral Directors Association and the Joint Committee on Infant Hearing. The websites were reviewed, and the key words of this article were used to find appropriate references; articles related in content were selected.    Results: Recent technologies have brought about great advancement in the field of hearing disorders. These impairments can now be detected at birth, and in the majority of cases hearing-impaired children can develop fluent spoken language through audition. According to research on the relationship between hearing-impaired children's communication and language skills and different approaches to therapy, it is known that learning through listening and…

  16. Temporal Resolution of ChR2 and Chronos in an Optogenetic-based Auditory Brainstem Implant Model: Implications for the Development and Application of Auditory Opsins

    Science.gov (United States)

    Hight, A. E.; Kozin, Elliott D.; Darrow, Keith; Lehmann, Ashton; Boyden, Edward; Brown, M. Christian; Lee, Daniel J.

    2015-01-01

    Contemporary auditory brainstem implant (ABI) performance is limited by its reliance on electrical stimulation, with its accompanying channel cross talk and current spread to non-auditory neurons. A new-generation ABI based on optogenetic technology may ameliorate limitations fundamental to electrical neurostimulation. The most widely studied opsin is channelrhodopsin-2 (ChR2); however, its relatively slow kinetic properties may prevent the encoding of auditory information at high stimulation rates. In the present study, we compare the temporal resolution of light-evoked responses of a recently developed fast opsin, Chronos, to ChR2 in a murine ABI model. Viral-mediated gene transfer via a posterolateral craniotomy was used to express Chronos or ChR2 in the mouse cochlear nucleus (CN). Following a four- to six-week incubation period, blue light (473 nm) was delivered via an optical fiber placed directly on the surface of the infected CN, and neural activity was recorded in the contralateral inferior colliculus (IC). Both ChR2 and Chronos evoked sustained responses to all stimuli, even at high driven rates. In addition, optical stimulation evoked excitatory responses throughout the tonotopic axis of the IC. Synchrony of the light-evoked response to stimulus rates of 14–448 pulses/s was higher in Chronos than in ChR2 mice (p<0.05 at 56, 168, and 224 pulses/s). Our results demonstrate that Chronos can drive the auditory system at higher stimulation rates than ChR2 and may be a more suitable opsin for manipulation of auditory pathways in future optogenetic-based neuroprostheses. PMID:25598479
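
    Spike-train synchrony to a periodic pulse train is commonly quantified with vector strength; the sketch below shows that standard computation on simulated spike times (the abstract does not state which synchrony metric the authors used, so this is an assumption for illustration).

        import numpy as np

        def vector_strength(spike_times, rate_hz):
            """1.0 = every spike at the same stimulus phase; ~0 = no locking."""
            phases = 2 * np.pi * rate_hz * np.asarray(spike_times)
            return np.abs(np.mean(np.exp(1j * phases)))

        rng = np.random.default_rng(0)
        spikes = np.arange(0, 1, 1 / 56) + rng.normal(0, 0.002, 56)  # jittered 56 pulses/s
        print(vector_strength(spikes, 56))   # about 0.8: strong but imperfect locking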

  17. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information to "feeling" musical rhythm, we investigated the degree to which auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms), presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects are presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.


  18. Electrical brain imaging evidences left auditory cortex involvement in speech and non-speech discrimination based on temporal features

    Directory of Open Access Journals (Sweden)

    Jancke Lutz

    2007-12-01

    Full Text Available Abstract Background Speech perception is based on a variety of spectral and temporal acoustic features available in the acoustic signal. Voice-onset time (VOT) is considered an important cue that is cardinal for phonetic perception. Methods In the present study, we recorded and compared scalp auditory evoked potentials (AEP) in response to consonant-vowel syllables (CV) with varying voice-onset times (VOT) and non-speech analogues with varying noise-onset times (NOT). In particular, we aimed to investigate the spatio-temporal pattern of acoustic feature processing underlying elemental speech perception and to relate this temporal processing mechanism to specific activations of the auditory cortex. Results The results show that the characteristic AEP waveform in response to consonant-vowel syllables is on a par with those of non-speech sounds with analogous temporal characteristics. The amplitudes of the N1a and N1b components of the auditory evoked potentials correlated significantly with the duration of the VOT in CV syllables and, likewise, with the duration of the NOT in non-speech sounds. Furthermore, current density maps indicate overlapping supratemporal networks involved in the perception of both speech and non-speech sounds, with a bilateral activation pattern during the N1a time window and leftward asymmetry during the N1b time window. An elaborate regional statistical analysis of the activation over the middle and posterior portions of the supratemporal plane (STP) revealed strong left-lateralized responses over the middle STP for both the N1a and N1b components, and a functional leftward asymmetry over the posterior STP for the N1b component. Conclusion The present data demonstrate overlapping spatio-temporal brain responses during the perception of temporal acoustic cues in both speech and non-speech sounds. Source estimation provides evidence for a preponderant role of the left middle and posterior auditory cortex in speech and non-speech discrimination based on temporal features.

  19. Sensitivity of cochlear nucleus neurons to spatio-temporal changes in auditory nerve activity.

    Science.gov (United States)

    Wang, Grace I; Delgutte, Bertrand

    2012-12-01

    The spatio-temporal pattern of auditory nerve (AN) activity, representing the relative timing of spikes across the tonotopic axis, contains cues to perceptual features of sounds such as pitch, loudness, timbre, and spatial location. These spatio-temporal cues may be extracted by neurons in the cochlear nucleus (CN) that are sensitive to the relative timing of inputs from AN fibers innervating different cochlear regions. One possible mechanism for this extraction is "cross-frequency" coincidence detection (CD), in which a central neuron converts the degree of coincidence across the tonotopic axis into a rate code by preferentially firing when its AN inputs discharge in synchrony. We used Huffman stimuli (Carney LH. J Neurophysiol 64: 437-456, 1990), which have a flat power spectrum but differ in their phase spectra, to systematically manipulate the relative timing of spikes across tonotopically neighboring AN fibers without changing overall firing rates. We compared responses of CN units to Huffman stimuli with responses of model CD cells operating on spatio-temporal patterns of AN activity derived from measured responses of AN fibers using the principle of cochlear scaling invariance. We used the maximum likelihood method to determine the CD model cell parameters most likely to produce the measured CN unit responses, and thereby could distinguish units behaving like cross-frequency CD cells from those consistent with same-frequency CD (in which all inputs would originate from the same tonotopic location). We find that certain CN unit types, especially those associated with globular bushy cells, have responses consistent with cross-frequency CD cells. A possible functional role of a cross-frequency CD mechanism in these CN units is to increase the dynamic range of binaural neurons that process cues for sound localization.
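
    A toy cross-frequency coincidence detector in the spirit of the model class described above is sketched below; the window, threshold, and refractory values are illustrative assumptions, not the fitted parameters of the study.

        import numpy as np

        def cd_output(input_trains, window=0.2e-3, n_required=3, refractory=1e-3):
            """input_trains: spike-time arrays (s) from AN fibers at neighboring CFs."""
            candidate_times = np.sort(np.concatenate(input_trains))
            out = []
            for t in candidate_times:
                # count how many input fibers spike within +/- window of t
                n = sum(np.any(np.abs(tr - t) <= window) for tr in input_trains)
                if n >= n_required and (not out or t - out[-1] > refractory):
                    out.append(t)                 # coincidence -> output spike
            return np.array(out)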

  1. The temporal window of audio-tactile integration in speech perception

    Science.gov (United States)

    Gick, Bryan; Ikegami, Yoko; Derrick, Donald

    2010-01-01

    Asynchronous cross-modal information is integrated asymmetrically in audio-visual perception. To test whether this asymmetry generalizes across modalities, auditory (aspirated “pa” and unaspirated “ba” stops) and tactile (slight, inaudible, cutaneous air puffs) signals were presented synchronously and asynchronously. Results were similar to previous AV studies: the temporal window of integration for the enhancement effect (but not the interference effect) was asymmetrical, allowing up to 200 ms of asynchrony when the puff followed the audio signal, but only up to 50 ms when the puff preceded the audio signal. These findings suggest that perceivers accommodate differences in physical transmission speed of different multimodal signals. PMID:21110549

  2. Periodicity extraction in the anuran auditory nerve. II: Phase and temporal fine structure.

    Science.gov (United States)

    Simmons, A M; Reese, G; Ferragamo, M

    1993-06-01

    …phase locking to simple sinusoids. Increasing stimulus intensity also shifts the synchronized responses of some fibers away from the fundamental frequency to one of the low-frequency harmonics in the stimuli. These data suggest that the synchronized firing of bullfrog eighth nerve fibers operates to extract the waveform periodicity of complex, multiple-harmonic stimuli, and this periodicity extraction is influenced by the phase spectrum and temporal fine structure of the stimuli. The similarity in response patterns of amphibian papilla and basilar papilla fibers argues that the frog auditory system employs primarily a temporal mechanism for extraction of first harmonic periodicity.

  3. Subthreshold K+ Channel Dynamics Interact With Stimulus Spectrum to Influence Temporal Coding in an Auditory Brain Stem Model

    Science.gov (United States)

    Day, Mitchell L.; Doiron, Brent; Rinzel, John

    2013-01-01

    Neurons in the auditory brain stem encode signals with exceptional temporal precision. A low-threshold potassium current, IKLT, present in many auditory brain stem structures and thought to enhance temporal encoding, facilitates spike selection of rapid input current transients through an associated dynamic gate. Whether the dynamic nature of IKLT interacts with the timescales in spectrally rich input to influence spike encoding remains unclear. We examine the general influence of IKLT on spike encoding of stochastic stimuli using a pattern classification analysis between spike responses from a ventral cochlear nucleus (VCN) model containing IKLT, and the same model with the dynamics removed. The influence of IKLT on spike encoding depended on the spectral content of the current stimulus such that maximal IKLT influence occurred for stimuli with power concentrated at frequencies low enough (<500 Hz) to allow IKLT activation. Further, broadband stimuli significantly decreased the influence of IKLT on spike encoding, suggesting that broadband stimuli are not well suited for investigating the influence of some dynamic membrane nonlinearities. Finally, pattern classification on spike responses was performed for physiologically realistic conductance stimuli created from various sounds filtered through an auditory nerve (AN) model. Regardless of the sound, the synaptic input arriving at VCN had similar low-pass power spectra, which led to a large influence of IKLT on spike encoding, suggesting that the subthreshold dynamics of IKLT plays a significant role in shaping the response of real auditory brain stem neurons. PMID:18057115

  4. Temporal sequence of visuo-auditory interaction in multiple areas of the guinea pig visual cortex.

    Directory of Open Access Journals (Sweden)

    Masataka Nishimura

    Full Text Available Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activity evoked by visual and/or acoustic stimulation was recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex. Visuo-auditory interactions were evaluated as the difference between the response evoked by combined auditory and visual stimulation and the sum of the responses evoked by separate visual and auditory stimulation. Simultaneous presentation of visual and acoustic stimulation resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.
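
    The additivity test described above reduces to one line of array arithmetic: the interaction is the bimodal response minus the sum of the unimodal responses. A minimal sketch with invented response values:

        import numpy as np

        resp_av = np.array([0.9, 1.4, 1.1])   # combined audio-visual response
        resp_a = np.array([0.2, 0.3, 0.2])    # auditory alone
        resp_v = np.array([0.6, 0.9, 0.8])    # visual alone

        interaction = resp_av - (resp_a + resp_v)  # nonzero -> non-additive integration
        print(interaction)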

  5. Encoding of sound localization cues by an identified auditory interneuron: effects of stimulus temporal pattern.

    Science.gov (United States)

    Samson, Annie-Hélène; Pollack, Gerald S

    2002-11-01

    An important cue for sound localization is binaural comparison of stimulus intensity. Two features of neuronal responses, response strength (i.e., spike count and/or rate) and response latency, vary with stimulus intensity, and binaural comparison of either or both might underlie localization. Previous studies at the receptor-neuron level showed that these response features are affected by the stimulus temporal pattern. When sounds are repeated rapidly, as occurs in many natural sounds, response strength decreases and latency increases, resulting in altered coding of localization cues. In this study we analyze binaural cues for sound localization at the level of an identified pair of interneurons (the left and right AN2) in the cricket auditory system, with emphasis on the effects of stimulus temporal pattern on binaural response differences. AN2 spike count decreases with rapidly repeated stimulation and latency increases. Both effects depend on stimulus intensity. Because of the difference in intensity at the two ears, binaural differences in spike count and latency change as stimulation continues. The binaural difference in spike count decreases, whereas the difference in latency increases. The proportional changes in response strength and in latency are greater at the interneuron level than at the receptor level, suggesting that factors in addition to decrement of receptor responses are involved. Intracellular recordings reveal that a slowly building, long-lasting hyperpolarization is established in AN2. At the same time, the level of depolarization reached during the excitatory postsynaptic potential (EPSP) resulting from each sound stimulus decreases. Neither these effects on membrane potential nor the changes in spiking response are accounted for by contralateral inhibition. Based on comparison of our results with earlier behavioral experiments, it is unlikely that crickets use the binaural difference in latency of AN2 responses as the main cue for sound localization.

  6. Identified auditory neurons in the cricket Gryllus rubens: temporal processing in calling song sensitive units.

    Science.gov (United States)

    Farris, Hamilton E; Mason, Andrew C; Hoy, Ronald R

    2004-07-01

    This study characterizes aspects of the anatomy and physiology of auditory receptors and certain interneurons in the cricket Gryllus rubens. We identified an 'L'-shaped ascending interneuron tuned to frequencies > 15 kHz (57 dB SPL threshold at 20 kHz). Also identified were two intrasegmental 'omega'-shaped interneurons that were broadly tuned to 3-65 kHz, with best sensitivity to frequencies of the male calling song (5 kHz, 52 dB SPL). The temporal sensitivity of units excited by calling song frequencies was measured using sinusoidally amplitude-modulated stimuli that varied in both modulation rate and depth, parameters that vary with song propagation distance and the number of singing males. Omega cells responded like low-pass filters with a time constant of 42 ms. In contrast, receptors significantly coded modulation rates up to the maximum rate presented (85 Hz). Whereas omegas required approximately 65% modulation depth at 45 Hz (calling song AM) to elicit significant synchrony coding, receptors tolerated an approximately 50% reduction in modulation depth up to 85 Hz. These results suggest that omega cells in G. rubens might not play a role in detecting song modulation per se at increased distances from a singing male.

  7. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

    Full Text Available There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception, and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that the phenomenon should be apparent in naturalistic visual experiences. We tested this using movie clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty of following the stimuli at the hypothesized 2-3 second scrambling condition. Moreover, this was the only difference that could not be accounted for by low-level visual properties. This provides the first evidence that the 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.

  8. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music.

    Science.gov (United States)

    Lee, Hweeling; Noppeney, Uta

    2014-01-01

    This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.

  9. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  10. Forward Masking: Temporal Integration or Adaptation?

    DEFF Research Database (Denmark)

    Ewert, Stephan D.; Hau, Ole; Dau, Torsten

    2007-01-01

    Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-date...

  11. Effects of temporal trial-by-trial cuing on early and late stages of auditory processing: evidence from event-related potentials.

    Science.gov (United States)

    Lampar, Alexa; Lange, Kathrin

    2011-08-01

    Temporal-cuing studies show faster responding to stimuli at an attended versus unattended time point. Whether the mechanisms involved in this temporal orienting of attention are located early or late in the processing stream has not been answered unequivocally. To address this question, we measured event-related potentials in two versions of an auditory temporal cuing task: stimuli at the uncued time point either required a response (Experiment 1) or did not (Experiment 2). In both tasks, attention was oriented to the cued time point, but attention could be selectively focused on the cued time point only in Experiment 2. In both experiments, temporal orienting was associated with a late positivity in the time range of the P3. An early enhancement in the time range of the auditory N1 was observed only in Experiment 2. Thus, temporal attention improves auditory processing at early sensory levels only when it can be focused selectively.

  12. Activity-dependent transmission and integration control the timescales of auditory processing at an inhibitory synapse.

    Science.gov (United States)

    Ammer, Julian J; Siveke, Ida; Felmy, Felix

    2015-06-15

    To capture the context of sensory information, neural networks must process input signals across multiple timescales. In the auditory system, a prominent change in temporal processing takes place at an inhibitory GABAergic synapse in the dorsal nucleus of the lateral lemniscus (DNLL). At this synapse, inhibition outlasts the stimulus by tens of milliseconds, such that it suppresses responses to lagging sounds, and is therefore implicated in echo suppression. Here, we untangle the cellular basis of this inhibition. We demonstrate with in vivo whole-cell patch-clamp recordings in Mongolian gerbils that the duration of inhibition increases with sound intensity. Activity-dependent spillover and asynchronous release translate the high presynaptic firing rates found in vivo into a prolonged synaptic output in acute slice recordings. A key mechanism controlling the inhibitory time course is the passive integration of the hyperpolarizing inhibitory conductance. This prolongation depends on the synaptic conductance amplitude. Computational modeling shows that this prolongation is a general mechanism and relies on a non-linear effect caused by synaptic conductance saturation when approaching the GABA reversal potential. The resulting hyperpolarization generates an efficient activity-dependent suppression of action potentials without affecting the threshold or gain of the input-output function. Taken together, the GABAergic inhibition in the DNLL is adjusted to the physiologically relevant duration by passive integration of inhibition with activity-dependent synaptic kinetics. This change in processing timescale combined with the reciprocal connectivity between the DNLLs implements a mechanism to suppress the distracting localization cues of echoes and helps to localize the initial sound source reliably.
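
    The saturation nonlinearity described above follows directly from the synaptic current equation I = g_syn * (V - E_GABA): as the membrane approaches the GABA reversal potential the driving force shrinks, so doubling the conductance hyperpolarizes by less than a factor of two. A steady-state sketch with illustrative passive-membrane values (not measured DNLL parameters):

        # Steady-state membrane potential with leak and GABAergic conductances
        # (conductances in nS, potentials in mV; values chosen for illustration).
        g_leak, e_leak, e_gaba = 10.0, -60.0, -90.0

        def v_steady(g_syn):
            # solves g_leak*(V - e_leak) + g_syn*(V - e_gaba) = 0
            return (g_leak * e_leak + g_syn * e_gaba) / (g_leak + g_syn)

        for g in (5, 10, 20, 40, 80):
            print(f"g_syn = {g:3d} nS -> V = {v_steady(g):.1f} mV")  # sublinear growth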

  13. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene.
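
    The regression step can be sketched as estimating a temporal response function (TRF): a set of lagged weights mapping the stimulus coherence time course onto the EEG by least squares. This is an illustrative reduction under simplified assumptions, not the authors' exact pipeline.

        import numpy as np

        def fit_trf(stimulus, eeg, max_lag=50):
            """Least-squares weights w[k]: eeg[t] ~ sum_k w[k] * stimulus[t-k]."""
            n = len(stimulus) - max_lag
            X = np.column_stack([stimulus[max_lag - k: max_lag - k + n]
                                 for k in range(max_lag)])
            y = eeg[max_lag: max_lag + n]
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            return w

        rng = np.random.default_rng(0)
        stim = rng.standard_normal(2000)              # e.g., a coherence time course
        true_w = np.exp(-np.arange(50) / 10.0)        # simulated neural response
        eeg = np.convolve(stim, true_w)[:2000] + 0.1 * rng.standard_normal(2000)
        w_hat = fit_trf(stim, eeg)                    # recovers true_w approximately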

  14. [Development of auditory-visual spatial integration using saccadic response time as the index].

    Science.gov (United States)

    Kato, Masaharu; Konishi, Kaoru; Kurosawa, Makiko; Konishi, Yukuo

    2006-05-01

    We measured saccadic response time (SRT) to investigate developmental changes related to spatially aligned or misaligned auditory and visual stimuli responses. We exposed 4-, 5-, and 11-month-old infants to ipsilateral or contralateral auditory-visual stimuli and monitored their eye movements using an electro-oculographic (EOG) system. The SRT analyses revealed four main results. First, saccades were triggered by visual stimuli but not always triggered by auditory stimuli. Second, SRTs became shorter as the children grew older. Third, SRTs for the ipsilateral and visual-only conditions were the same in all infants. Fourth, SRTs for the contralateral condition were longer than for the ipsilateral and visual-only conditions in 11-month-old infants but were the same for all three conditions in 4- and 5-month-old infants. These findings suggest that infants acquire the function of auditory-visual spatial integration underlying saccadic eye movement between the ages of 5 and 11 months. The dependency of SRTs on the spatial configuration of auditory and visual stimuli can be explained by cortical control of the superior colliculus. Our finding of no differences in SRTs between the ipsilateral and visual-only conditions suggests that there are multiple pathways for controlling the superior colliculus and that these pathways have different developmental time courses.

  15. The temporal relationship between the brainstem and primary cortical auditory evoked potentials.

    Science.gov (United States)

    Shaw, N A

    1995-10-01

    Many methods are employed in order to define more precisely the generators of an evoked potential (EP) waveform. One technique is to compare the timing of an EP whose origin is well established with that of one whose origin is less certain. In the present article, the latency of the primary cortical auditory evoked potential (PCAEP) was compared to each of the seven subcomponents which compose the brainstem auditory evoked potential (BAEP). The data for this comparison were derived from a retrospective analysis of previous recordings of the PCAEP and BAEP. Central auditory conduction time (CACT) was calculated by subtracting the latency of the cochlear nucleus BAEP component (wave III) from that of the PCAEP. It was found that CACT in humans is 12 msec, more than double the central somatosensory conduction time. The interpeak latencies between BAEP waves V, VI, and VII and the PCAEP were also calculated. It was deduced that all three waves must have an origin rather more caudal within the central auditory system than is commonly supposed. In addition, it is demonstrated that the early components of the middle latency AEP (No and Na) largely reside within the time domain between the termination of the BAEP components and the PCAEP, which would be consistent with their being far-field reflections of midbrain and subcortical auditory activity. It is concluded that as the afferent volley ascends the central auditory pathways, it generates not a sequence of high-frequency BAEP responses but rather a succession of slower post-synaptic waves. The only means of reconciling the timing of the BAEP waves with that of the PCAEP is to assume that the generation of all the BAEP components must be largely restricted to a quite confined region within the auditory nerve and the lower half of the pons.

  16. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    Science.gov (United States)

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation, to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays, the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1, twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2, a further sixteen musical novices were asked to pay attention either to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, performance worsened in the incongruent condition that combined different emotional auditory and visual information from the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event, despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent of temporal features.

  17. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  18. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)

    2001-12-01

    To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval, contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  19. Sparse Spectro-Temporal Receptive Fields Based on Multi-Unit and High-Gamma Responses in Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Rick L Jenison

    Full Text Available Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time, and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduce the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl's gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl's gyrus recordings elicited by click-train stimuli.
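
    As a rough illustration of the group-sparsity idea, the sketch below fits a linear STRF with a group-lasso penalty by proximal gradient descent (ISTA). This is a generic estimator on toy data, with coefficients grouped by frequency channel so that whole rows of the STRF can shrink exactly to zero; it is not the authors' exact GLM, and all shapes and the penalty weight are assumptions.

```python
# Sketch: group-sparse STRF estimation via ISTA on a linear model.
import numpy as np

def group_soft_threshold(w, groups, thresh):
    """Shrink each coefficient group toward zero; whole groups can vanish."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > thresh:
            out[g] = w[g] * (1 - thresh / norm)
    return out

def fit_strf(X, y, groups, lam, n_iter=500):
    """X: (time, freq*lag) lagged stimulus matrix; y: neural response."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                  # least-squares gradient
        w = group_soft_threshold(w - step * grad, groups, lam * step)
    return w

# Toy demo: 16 frequency channels x 10 lags; only channel 0 drives y.
n_freq, n_lag = 16, 10
groups = [range(f * n_lag, (f + 1) * n_lag) for f in range(n_freq)]
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, n_freq * n_lag))
y = X[:, :n_lag].sum(axis=1) + rng.standard_normal(2000)
w = fit_strf(X, y, groups, lam=300.0)             # lam tuned to toy scale
print("surviving groups:",
      [f for f in range(n_freq) if np.linalg.norm(w[groups[f]]) > 0])
```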

  20. ERPs reveal the temporal dynamics of auditory word recognition in specific language impairment.

    Science.gov (United States)

    Malins, Jeffrey G; Desroches, Amy S; Robertson, Erin K; Newman, Randy Lynn; Archibald, Lisa M D; Joanisse, Marc F

    2013-07-01

    We used event-related potentials (ERPs) to compare auditory word recognition in children with specific language impairment (SLI group; N=14) to a group of typically developing children (TD group; N=14). Subjects were presented with pictures of items and heard auditory words that either matched or mismatched the pictures. Mismatches overlapped expected words in word onset (cohort mismatches; see: DOLL, hear: dog), rhyme (CONE-bone), or were unrelated (SHELL-mug). In match trials, the SLI group showed a different pattern of N100 responses to auditory stimuli compared to the TD group, indicative of early auditory processing differences in SLI. However, the phonological mapping negativity (PMN) response to mismatching items was comparable across groups, suggesting that just like TD children, children with SLI are capable of establishing phonological expectations and detecting violations of these expectations in an online fashion. Perhaps most importantly, we observed a lack of attenuation of the N400 for rhyming words in the SLI group, which suggests that either these children were not as sensitive to rhyme similarity as their typically developing peers, or did not suppress lexical alternatives to the same extent. These findings help shed light on the underlying deficits responsible for SLI.

  1. Dissociation between spatial and temporal integration mechanisms in Vernier fusion.

    Science.gov (United States)

    Drewes, Jan; Zhu, Weina; Melcher, David

    2014-12-01

    The visual system constructs a percept of the world across multiple spatial and temporal scales. This raises the questions of whether different scales involve separate integration mechanisms and whether spatial and temporal factors are linked via spatio-temporal reference frames. We investigated this using Vernier fusion, a phenomenon in which the features of two Vernier stimuli presented in close spatio-temporal proximity are fused into a single percept. With increasing spatial offset, perception changes dramatically from a single percept into apparent motion and later, at larger offsets, into two separately perceived stimuli. We tested the link between spatial and temporal integration by presenting two successive Vernier stimuli presented at varying spatial and temporal offsets. The second Vernier either had the same or the opposite offset as the first. We found that the type of percept depended not only on spatial offset, as reported previously, but interacted with the temporal parameter as well. At temporal separations around 30-40 ms the majority of trials were perceived as motion, while above 70 ms predominantly two separate stimuli were reported. The dominance of the second Vernier varied systematically with temporal offset, peaking around 40 ms ISI. Same-offset conditions showed increasing amounts of perceived separation at large ISIs, but little dependence on spatial offset. As subjects did not always completely fuse stimuli, we separated trials by reported percept (single/fusion, motion, double/segregation). We found systematic indications of spatial fusion even on trials in which subjects perceived temporal segregation. These findings imply that spatial integration/fusion may occur even when the stimuli are perceived as temporally separate entities, suggesting that the mechanisms responsible for temporal segregation and spatial integration may not be mutually exclusive.

  2. HIT, hallucination focused integrative treatment as early intervention in psychotic adolescents with auditory hallucinations : a pilot study

    NARCIS (Netherlands)

    Jenner, JA; van de Willige, G

    2001-01-01

    Objective: Early intervention in psychosis is considered important for relapse prevention. The limited results of monotherapies have prompted the development of multimodular programmes. The present study tests the feasibility and effectiveness of HIT, an integrative early intervention treatment for auditory hallucinations

  3. Visual-Auditory Integration during Speech Imitation in Autism

    Science.gov (United States)

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  4. Spatial interactions determine temporal feature integration as revealed by unmasking

    Directory of Open Access Journals (Sweden)

    Michael H. Herzog

    2006-01-01

    Full Text Available Feature integration is one of the most fundamental problems in neuroscience. In a recent contribution, we showed that a trailing grating can diminish the masking effects one vernier exerts on another, preceding vernier. Here, we show that this temporal unmasking depends on neural spatial interactions related to the trailing grating. Hence, our paradigm allows us to study the spatio-temporal interactions underlying feature integration.

  5. Processamento temporal, localização e fechamento auditivo em portadores de perda auditiva unilateral Temporal processing, localization and auditory closure in individuals with unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Regiane Nishihata

    2012-01-01

    PURPOSE: To characterize temporal processing, sound localization, and auditory closure abilities, and to investigate possible associations with complaints of learning, communication and language difficulties in individuals with unilateral hearing loss. METHODS: Participants were 26 individuals with ages between 8 and 15 years, divided into two groups: a unilateral hearing loss group and a normal hearing group. Each group was composed of 13 individuals, matched by gender, age and educational level. All subjects were submitted to anamnesis, peripheral hearing evaluation, and auditory processing evaluation through behavioral tests of sound localization, sequential memory, the Random Gap Detection Test, and a speech-in-noise test. Nonparametric statistical tests were used to compare the groups, considering the presence or absence of hearing loss and the ear with hearing loss. RESULTS: Unilateral hearing loss started during preschool, and had unknown or identified etiologies, such as meningitis, traumas or mumps. Most individuals reported delays in speech, language and learning development, especially those with hearing loss in the right ear. The group with hearing loss had worse responses in the abilities of temporal ordering and resolution, sound localization and auditory closure. Individuals with hearing loss in the left ear showed worse results than those with hearing loss in the right ear in all abilities, except in sound localization. CONCLUSION: The presence of unilateral hearing loss causes sound localization, auditory closure, temporal ordering and temporal resolution difficulties. Individuals with unilateral hearing loss in the right ear have more complaints than those with unilateral hearing loss in the left ear. Individuals with hearing loss in the left ear have more difficulties in auditory closure, temporal resolution, and temporal ordering.

  6. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information, indicative of a place code. Diversity in local processing emphasis and the distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
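
    The comparison between codes rests on estimating mutual information between the stimulus repetition rate and a response variable. A minimal plug-in estimator is sketched below; the binning scheme and the demo data (ISIs shrinking with repetition rate) are assumptions for illustration, not the recorded responses.

```python
# Sketch: plug-in mutual information (bits) between a discrete stimulus
# label and a continuous response variable such as mean ISI or rate.
import numpy as np

def mutual_information(stim, resp, n_bins=10):
    edges = np.histogram_bin_edges(resp, n_bins)
    resp_binned = np.digitize(resp, edges)            # values 0 .. n_bins+1
    joint = np.zeros((int(stim.max()) + 1, n_bins + 2))
    for s, r in zip(stim, resp_binned):
        joint[s, r] += 1                              # joint histogram
    p = joint / joint.sum()
    ps, pr = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Demo: four click-train rates; the ISI shrinks as the rate increases.
rng = np.random.default_rng(2)
stim = rng.integers(0, 4, 2000)                       # stimulus labels
resp = 1.0 / (stim + 1) + 0.1 * rng.standard_normal(2000)
print(f"ISI-like code carries {mutual_information(stim, resp):.2f} bits")
```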

  7. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: an ERP study.

    Science.gov (United States)

    van Zuijen, Titia L; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-10-18

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or without familial risk for dyslexia to investigate whether their auditory system was able to detect a temporal change in a tone pattern. The children were followed longitudinally and completed intelligence and language development tests at ages 4 and 4.5 years. Literacy-related skills were measured at the beginning of second grade, and word- and pseudoword-reading fluency were measured at the end of second grade. The EEG responses showed that control children could detect the temporal change, as indicated by a mismatch response (MMR). The MMR was not observed in at-risk children. Furthermore, the fronto-central MMR amplitude correlated with preliterate language comprehension and with later word reading fluency, but not with phonological awareness. We conclude that temporal auditory processing differentiates young children at risk for dyslexia from controls and is a precursor of preliterate language comprehension and reading fluency.

  8. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    Science.gov (United States)

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of the pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system, and, as indexed by the ERP suppression, the sensory feedback contribution to their processing is reduced. These findings provide new insights into the neural mechanisms of vocal production and motor control.

  9. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding - the inability to see an object when it is surrounded by flankers in the periphery - does not block semantic activation: words rendered unrecognizable by visual crowding still generate robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word, in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context that affects processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration - the simplest kind of temporal semantic integration - did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study.

  10. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information.

  11. Spatio-temporal data analytics for wind energy integration

    CERN Document Server

    Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief presents spatio-temporal data analytics for wind energy integration using stochastic modeling and optimization methods. It explores techniques for efficiently integrating renewable energy generation into bulk power grids. The operational challenges of wind and its variability are carefully examined. A spatio-temporal analysis approach enables the authors to develop Markov-chain-based short-term forecasts of wind farm power generation. To deal with wind ramp dynamics, a support vector machine enhanced Markov model is introduced. The stochastic optimization of economic dispatch
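
    As a flavor of the Markov-chain approach, the sketch below discretizes a power time series into states, estimates the transition matrix from history, and forecasts expected power one step ahead. The state edges, centers, and demo series are illustrative assumptions; the book's support-vector-machine-enhanced ramp model is not reproduced here.

```python
# Sketch: Markov-chain short-term forecast of wind farm power output.
import numpy as np

def fit_transition_matrix(power, edges):
    states = np.digitize(power, edges)       # discretize into power states
    n = len(edges) + 1
    P = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1                         # count observed transitions
    P /= np.maximum(P.sum(axis=1, keepdims=True), 1)   # row-normalize
    return P

def forecast(P, current_state, centers, steps=1):
    dist = np.zeros(P.shape[0])
    dist[current_state] = 1.0
    for _ in range(steps):
        dist = dist @ P                      # propagate the chain
    return dist @ centers                    # expected power level

# Demo with a fake quasi-periodic series; edges/centers chosen by hand.
rng = np.random.default_rng(3)
power = np.abs(np.sin(np.linspace(0, 20, 500))) * 10 + rng.normal(0, 0.5, 500)
edges = np.array([2.5, 5.0, 7.5])            # boundaries of 4 states
centers = np.array([1.25, 3.75, 6.25, 8.75]) # representative power levels
P = fit_transition_matrix(power, edges)
state_now = int(np.digitize(power[-1], edges))
print(f"one-step-ahead forecast: {forecast(P, state_now, centers):.2f} MW")
```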

  12. An auditory illusion reveals the role of streaming in the temporal misallocation of perceptual objects.

    Science.gov (United States)

    Mehta, Anahita H; Jacoby, Nori; Yasin, Ifat; Oxenham, Andrew J; Shamma, Shihab A

    2017-02-19

    This study investigates the neural correlates and processes underlying the ambiguous percept produced by a stimulus similar to Deutsch's 'octave illusion', in which each ear is presented with a sequence of alternating pure tones of low and high frequencies. The same sequence is presented to each ear, but in opposite phase, such that the left and right ears receive a high-low-high … and a low-high-low … pattern, respectively. Listeners generally report hearing the illusion of an alternating pattern of low and high tones, with all the low tones lateralized to one side and all the high tones lateralized to the other side. The current explanation of the illusion is that it reflects an illusory feature conjunction of pitch and perceived location. Using psychophysics and electroencephalogram measures, we test this and an alternative hypothesis involving synchronous and sequential stream segregation, and investigate potential neural correlates of the illusion. We find that the illusion of alternating tones arises from the synchronous tone pairs across ears rather than sequential tones in one ear, suggesting that the illusion involves a misattribution of time across perceptual streams, rather than a misattribution of location within a stream. The results provide new insights into the mechanisms of binaural streaming and synchronous sound segregation. This article is part of the themed issue 'Auditory and visual scene analysis'.

  13. Electrophysiological and Behavioral Outcomes of Berard Auditory Integration Training (AIT) in Children with Autism Spectrum Disorder.

    Science.gov (United States)

    Sokhadze, Estate M; Casanova, Manuel F; Tasman, Allan; Brockett, Sally

    2016-12-01

    Autism is a pervasive developmental disorder of childhood characterized by deficits in social interaction, language, and stereotyped behaviors, along with a restricted range of interests. It is further marked by an inability to perceive and respond to social and emotional signals in a typical manner. This might be due to functional disconnectivity of networks important for specific aspects of social cognition and behavioral control, resulting in deficits of sensory information integration. According to several recent theories, sensory processing and integration abnormalities may play an important role in impairments of perception, cognition, and behavior in individuals with autism. Among these sensory abnormalities, auditory perception distortion may contribute to many typical symptoms of autism. The present study used Berard's technique of auditory integration training (AIT) to improve sound integration in children with autism. It also aimed to understand the abnormal neural and functional mechanisms underlying sound processing distortion in autism by incorporating behavioral, psychophysiological and neurophysiological outcomes. It was proposed that exposure to twenty 30-min AIT sessions (10 h of training in total) would result in improved behavioral evaluation scores, an improved profile of cardiorespiratory activity, and positive effects on both early [N1, mismatch negativity (MMN)] and late (P3) components of evoked potentials in an auditory oddball task. Eighteen children with autism spectrum disorder (ASD) participated in the study. A group of 16 typically developing children served as a contrast group in the auditory oddball task. Autonomic outcomes of the study reflected a linear increase in heart rate variability measures and respiration rate. Comparison of evoked potential characteristics of children with ASD versus typically developing children revealed several group differences; more specifically, a delayed latency of N1 to rare and frequent stimuli, larger

  14. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy.

    Science.gov (United States)

    Rocha, Caroline Nunes; Miziara, Carmen Silvia Molleis Galego; Manreza, Maria Luiza Giraldes de; Schochat, Eliane

    2010-02-01

    The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) in subjects with left mesial temporal sclerosis (LMTS) on the behavioral Dichotic Digits Test (DDT) and the event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and P300 amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls. This difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. Regarding P300 latency and amplitude, as a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions relative to controls, with the difference being statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of P300 between affected and unaffected hemispheres.

  15. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp eHoman

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  16. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: the processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  17. [Auditory-evoked responses to a monaural or a binaural click, recorded from the vertex, as in two temporal derivations; effect of interaural time differences (author's transl)].

    Science.gov (United States)

    Botte, M C; Chocholle, R

    1976-01-01

    Auditory-evoked responses were recorded in 5 subjects from vertex, right temporal, and left temporal electrodes simultaneously. Clicks at a 30 dB sensation level were used as stimuli: a click presented only to the right ear, only to the left ear, or one click to each ear, in the latter case with a variable interaural time difference (0-150 ms). Variations in N-P amplitude and in N and P latencies were studied and compared with the perceived lateralizations of the sound source.

  18. Auditory Integration Training: A Double-Blind Study of Behavioral and Electrophysiological Effects in People with Autism.

    Science.gov (United States)

    Edelson, Stephen M.; Arin, Deborah; Bauman, Margaret; Lukas, Scott E.; Rudy, Jane H.; Sholar, Michelle; Rimland, Bernard

    1999-01-01

    Nineteen individuals with autism listened either to music processed by auditory integration training or to unprocessed music for 20 half-hour sessions. A significant decrease in Aberrant Behavior Checklist scores was observed in the experimental group at the 30-month follow-up assessment. In addition, three experimental subjects but no controls showed a…

  19. Representation of spectro-temporal features of spoken words within the P1-N1-P2 and T-complex of the auditory evoked potentials (AEP).

    Science.gov (United States)

    Wagner, Monica; Roychoudhury, Arindam; Campanelli, Luca; Shafer, Valerie L; Martin, Brett; Steinschneider, Mitchell

    2016-02-12

    The purpose of the study was to determine whether P1-N1-P2 and T-complex morphology reflect spectro-temporal features within spoken words that approximate the natural variation of a speaker and whether waveform morphology is reliable at group and individual levels, necessary for probing auditory deficits. The P1-N1-P2 and T-complex to the syllables /pət/ and /sət/ within 70 natural word productions each were examined. EEG was recorded while participants heard nonsense word pairs and performed a syllable identification task to the second word in the pairs. Single trial auditory evoked potentials (AEP) to the first words were analyzed. Results found P1-N1-P2 and T-complex to reflect spectral and temporal feature processing. Also, results identified preliminary benchmarks for single trial response variability for individual subjects for sensory processing between 50 and 600ms. P1-N1-P2 and T-complex, at least at group level, may serve as phenotypic signatures to identify deficits in spectro-temporal feature recognition and to determine area of deficit, the superior temporal plane or lateral superior temporal gyrus.

  20. Calcium-dependent control of temporal processing in an auditory interneuron: a computational analysis.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2010-09-01

    Sensitivity to acoustic amplitude modulation in crickets differs between species and depends on carrier frequency (e.g., calling song vs. bat-ultrasound bands). Using computational tools, we explore how Ca(2+)-dependent mechanisms underlying selective attention can contribute to such differences in amplitude modulation sensitivity. For omega neuron 1 (ON1), selective attention is mediated by Ca(2+)-dependent feedback: [Ca(2+)](internal) increases with excitation, activating a Ca(2+)-dependent after-hyperpolarizing current. We propose that the Ca(2+) removal rate and the size of the after-hyperpolarizing current can determine ON1's temporal modulation transfer function (TMTF). This is tested using a conductance-based simulation calibrated to responses in vivo. The model shows that parameter values that simulate responses to single pulses are sufficient to simulate responses to modulated stimuli: no special modulation-sensitive mechanisms are necessary, as the high- and low-pass portions of the TMTF are due to Ca(2+)-dependent spike frequency adaptation and post-synaptic potential depression, respectively. Furthermore, variance in the two biophysical parameters is sufficient to produce TMTFs of varying bandwidth, shifting amplitude modulation sensitivity as observed in different species and in response to different carrier frequencies. Thus, the hypothesis that the size of the after-hyperpolarizing current and the rate of Ca(2+) removal can affect amplitude modulation sensitivity is computationally validated.
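
    The proposed mechanism can be caricatured with a leaky integrate-and-fire model in which each spike increments intracellular Ca2+, and the Ca2+ level gates an after-hyperpolarizing K+ current. The sketch below uses illustrative parameters, not the authors' calibrated conductance model, to show how the Ca2+ removal rate shapes spike output for an amplitude-modulated input.

```python
# Sketch: spike-frequency adaptation from a Ca2+-gated AHP current.
import numpy as np

def simulate(stim, dt=1e-4, tau_ca=0.2, g_ahp_max=50e-9):
    """stim: input current (A) per time step; returns spike times (s)."""
    E_l, E_k, g_l, C = -65e-3, -90e-3, 10e-9, 100e-12
    V, ca, thresh = E_l, 0.0, -50e-3
    spikes = []
    for i, I_in in enumerate(stim):
        g_ahp = g_ahp_max * ca / (ca + 1.0)    # Ca2+-gated AHP conductance
        dV = (-g_l * (V - E_l) - g_ahp * (V - E_k) + I_in) / C
        V += dt * dV
        ca -= dt * ca / tau_ca                 # Ca2+ removal
        if V > thresh:                         # spike-and-reset
            spikes.append(i * dt)
            V = E_l
            ca += 0.2                          # Ca2+ influx per spike
    return np.array(spikes)

# Slower Ca2+ removal (larger tau_ca) deepens adaptation, attenuating
# the response to slow amplitude modulations and thereby shifting the TMTF.
t = np.arange(0, 2, 1e-4)
stim = 0.4e-9 * (1 + np.sin(2 * np.pi * 5 * t))   # 5 Hz AM input current
for tau_ca in (0.05, 0.4):
    print(f"tau_ca = {tau_ca:4.2f} s -> {len(simulate(stim, tau_ca=tau_ca))} spikes")
```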

  1. Aging and Spectro-Temporal Integration of Speech

    Directory of Open Access Journals (Sweden)

    John H. Grose

    2016-10-01

    Full Text Available The purpose of this study was to determine the effects of age on the spectro-temporal integration of speech. The hypothesis was that the integration of speech fragments distributed over frequency, time, and ear of presentation is reduced in older listeners—even for those with good audiometric hearing. Younger, middle-aged, and older listeners (10 per group with good audiometric hearing participated. They were each tested under seven conditions that encompassed combinations of spectral, temporal, and binaural integration. Sentences were filtered into two bands centered at 500 Hz and 2500 Hz, with criterion bandwidth tailored for each participant. In some conditions, the speech bands were individually square wave interrupted at a rate of 10 Hz. Configurations of uninterrupted, synchronously interrupted, and asynchronously interrupted frequency bands were constructed that constituted speech fragments distributed across frequency, time, and ear of presentation. The over-arching finding was that, for most configurations, performance was not differentially affected by listener age. Although speech intelligibility varied across condition, there was no evidence of performance deficits in older listeners in any condition. This study indicates that age, per se, does not necessarily undermine the ability to integrate fragments of speech dispersed across frequency and time.

  2. Medial temporal lobe roles in human path integration.

    Directory of Open Access Journals (Sweden)

    Naohide Yamamoto

    Full Text Available Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.

  3. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an autosomal dominant epileptic syndrome characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were subsequently screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in subsequent generations, which is especially important for genetic counselling.

  4. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    Science.gov (United States)

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044
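
    The reported stimulus-response correlation can be approximated with an envelope-following analysis: extract the amplitude envelopes of the stimulus and the AEP with a Hilbert transform, smooth them, and correlate. The function below is a generic sketch; the signal names, sampling rate, and smoothing window are assumptions rather than the authors' procedure.

```python
# Sketch: correlate stimulus and AEP amplitude envelopes.
import numpy as np
from scipy.signal import hilbert

def envelope_correlation(stimulus, aep, fs, win_s=0.005):
    """Pearson r between smoothed stimulus and response envelopes."""
    w = max(1, int(fs * win_s))
    win = np.ones(w) / w                       # 5 ms moving-average window
    env_s = np.convolve(np.abs(hilbert(stimulus)), win, mode="same")
    env_r = np.convolve(np.abs(hilbert(aep)), win, mode="same")
    return np.corrcoef(env_s, env_r)[0, 1]

# e.g. envelope_correlation(boatwhistle_waveform, aep_trace, fs=16000)
# (hypothetical signal names for illustration)
```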

  5. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system cons

  6. Evaluation of temporal bone pneumatization on high resolution CT (HRCT) measurements of the temporal bone in normal and otitis media group and their correlation to measurements of internal auditory meatus, vestibular or cochlear aqueduct

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Miyako

    1988-07-01

    High resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These cases comprised 109 sides with normal pneumatization (NR group) and 73 sides with poor pneumatization resulting from chronic otitis media (OM group). The NR group included sides from cases of sensorineural hearing loss and/or sudden deafness on that side. Three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct, and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter and length of the internal auditory meatus, the vestibular aqueduct, or the cochlear aqueduct were also measured. Measurements of the temporal bone showed statistically significant differences between the NR and OM groups. Correlations of both the diameter and length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither the vestibular nor the cochlear aqueduct measurements showed any significant correlation with those of the temporal bone.

  7. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means

  8. The oscillatory activities and its synchronization in auditory-visual integration as revealed by event-related potentials to bimodal stimuli

    Science.gov (United States)

    Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie

    2012-03-01

    The neural mechanism of auditory-visual speech integration is a central topic in the study of multimodal perception. Articulatory gestures convey speech information that helps detect and disambiguate the auditory speech signal. Oscillatory activity and its synchronization, as important characteristics of the EEG, have been applied increasingly in cognition research. This study analyzed EEG data acquired with unimodal and bimodal stimuli using time-frequency and phase-synchrony approaches, investigating the oscillatory activities and synchrony modes underlying the evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. Beta activity and differences in its synchronization were related to the gesture N1-P2, which occurs at an early stage of encoding speech from the articulatory action. Alpha oscillations and their synchronization, related to the auditory N1-P2, may be mainly responsible for auditory speech processing driven by the anticipation of sound features from the observed gesture. Changes in the visual gesture enhanced the interaction of auditory brain regions. These results help explain the changes in power and connectivity of event-evoked oscillatory activities that accompany ERPs during auditory-visual speech integration.
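
    The phase-synchrony analysis referred to here is commonly quantified with the phase-locking value (PLV). The sketch below computes the PLV between two channels after band-pass filtering and a Hilbert transform; the filter order, band edges, and channel names in the usage comment are assumptions for illustration.

```python
# Sketch: phase-locking value (PLV) between two EEG channels in a band.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """PLV in [0, 1]: 1 means a perfectly constant phase difference."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], "bandpass")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# e.g. beta-band synchrony between two hypothetical channels:
# plv(eeg["FC5"], eeg["T7"], fs=500, band=(13, 30))
```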

  9. Preservation of perceptual integration improves temporal stability of bimanual coordination in the elderly: an evidence of age-related brain plasticity.

    Science.gov (United States)

    Blais, Mélody; Martin, Elodie; Albaret, Jean-Michel; Tallet, Jessica

    2014-12-15

    Despite the apparent age-related decline in perceptual-motor performance, recent studies suggest that elderly people can improve their reaction times when relevant sensory information is available. However, little is known about which sensory information may improve motor behaviour itself. Using a synchronization task, the present study investigates how visual and/or auditory stimulation could increase the accuracy and stability of three bimanual coordination modes produced by elderly and young adults. Neurophysiological activations were recorded with electroencephalography (EEG) to explore the neural mechanisms underlying the behavioural effects. Results reveal that the elderly stabilize all coordination modes when auditory or audio-visual stimulation is available, compared with visual stimulation alone. This suggests that auditory stimulation is sufficient to improve the temporal stability of rhythmic coordination, even more so in the elderly. This behavioural effect is primarily associated with increased attentional and sensorimotor-related neural activations in the elderly, but similar perceptual-related activations in elderly and young adults. This suggests that, despite a degradation of attentional and sensorimotor neural processes, perceptual integration of auditory stimulation is preserved in the elderly. These results suggest that perceptual-related brain plasticity is, at least partially, conserved in normal aging.

  10. Laevo: A Temporal Desktop Interface for Integrated Knowledge Work

    DEFF Research Database (Denmark)

    Jeuris, Steven; Houben, Steven; Bardram, Jakob

    2014-01-01

    Prior studies show that knowledge work is characterized by highly interlinked practices, including task, file and window management. However, existing personal information management tools primarily focus on a limited subset of knowledge work, forcing users to perform additional manual configuration work to integrate the different tools they use. In order to understand tool usage, we review literature on how users' activities are created and evolve over time as part of knowledge worker practices. From this we derive the activity life cycle, a conceptual framework describing the different states and transitions of an activity. The life cycle is used to inform the design of Laevo, a temporal activity-centric desktop interface for personal knowledge work. Laevo allows users to structure work within dedicated workspaces, managed on a timeline. Through a centralized notification system which

  11. Dynamics of Neural Responses in Ferret Primary Auditory Cortex: I. Spectro-Temporal Response Field Characterization by Dynamic Ripple Spectra

    Science.gov (United States)

    1999-01-01

  12. Gone in a Flash: Manipulation of Audiovisual Temporal Integration Using Transcranial Magnetic Stimulation

    Directory of Open Access Journals (Sweden)

    Roy eHamilton

    2013-09-01

    Full Text Available While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke, Vieth, Cottrell, and Mattingley (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy.

  13. Gone in a flash: manipulation of audiovisual temporal integration using transcranial magnetic stimulation

    Science.gov (United States)

    Hamilton, Roy H.; Wiener, Martin; Drebing, Daniel E.; Coslett, H. Branch

    2013-01-01

    While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke et al. (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy. PMID:24062701

  15. Assessment and Preservation of Auditory Nerve Integrity in the Deafened Guinea Pig

    NARCIS (Netherlands)

    Ramekers, D.

    2014-01-01

    Profound hearing loss is often caused by cochlear hair cell loss. Cochlear implants (CIs) essentially replace hair cells by encoding sound and conveying the signal by means of pulsatile electrical stimulation to the spiral ganglion cells (SGCs), which form the auditory nerve. SGCs progressively degenerate ...

  16. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    ... for the chirp-evoked ABRs indicated a relation to SRTs and the ability to process temporal fine structure. Overall, the results demonstrate the importance of low-frequency temporal processing for speech reception, which can be affected even if pure-tone sensitivity is close to normal.

  17. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth Hendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. There are many studies on how temporal-order decisions can be influenced by stimulus characteristics. However, it has not yet been investigated whether the addition of a choice reaction time task influences temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations influence temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does influence temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  18. Improving the efficiency of multisensory integration in older adults: audio-visual temporal discrimination training reduces susceptibility to the sound-induced flash illusion.

    Science.gov (United States)

    Setti, Annalisa; Stapleton, John; Leahy, Daniel; Walsh, Cathal; Kenny, Rose Anne; Newell, Fiona N

    2014-08-01

    From language to motor control, efficient integration of information from different sensory modalities is necessary for maintaining a coherent interaction with the environment. While a number of training studies have focused on training perceptual and cognitive function, only very few are specifically targeted at improving multisensory processing. Discrimination of temporal order or coincidence is a criterion used by the brain to determine whether cross-modal stimuli should be integrated or not. In this study we trained older adults to judge the temporal order of visual and auditory stimuli. We then tested whether the training had an effect in reducing susceptibility to a multisensory illusion, the sound-induced flash illusion. Improvement in the temporal order judgement task was associated with a reduction in susceptibility to the illusion, particularly at longer Stimulus Onset Asynchronies, in line with a more efficient multisensory processing profile. The present findings lay the groundwork for broader training programs aimed at improving older adults' cognitive performance in domains in which efficient temporal integration across the senses is required.

  19. [Analysis of auditory information in the brain of the cetacean].

    Science.gov (United States)

    Popov, V V; Supin, A Ia

    2006-01-01

    The cetacean brain is characterized by exceptional development of the auditory neural centres. The location of the projection sensory areas, including the auditory area, in the cetacean cerebral cortex differs markedly from that in other mammals. EP characteristics indicate the presence of several functional divisions in the auditory cortex. Physiological studies of the cetacean auditory centres have mainly been performed using the evoked-potential (EP) technique. Of the several types of EPs, the short-latency auditory EP has been studied most thoroughly. In cetaceans it is characterized by exceptionally high temporal resolution, with an integration time of about 0.3 ms, corresponding to a cut-off frequency of 1700 Hz. This greatly exceeds the temporal resolution of hearing in terrestrial mammals. The frequency selectivity of cetacean hearing has been measured using several variants of the masking technique. The acuity of frequency selectivity in cetaceans exceeds that of most terrestrial mammals (except bats). This acute frequency selectivity supports discrimination of the finest spectral patterns of auditory signals.
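
    A quick sanity check on the numbers above: an integration time and a cut-off frequency are, to a first approximation, reciprocally related. Assuming the simple rule of thumb that the cut-off is the inverse of twice the integration time (our assumption, not stated in the record), the abstract's two figures are mutually consistent:

```latex
% Hedged arithmetic sketch: cut-off frequency from integration time,
% assuming f_c \approx 1/(2\tau).
f_c \approx \frac{1}{2\tau} = \frac{1}{2 \times 0.3\,\mathrm{ms}}
    \approx 1667\,\mathrm{Hz} \approx 1700\,\mathrm{Hz}
```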

  20. Extended temporal integration in rapid serial visual presentation: Attentional control at Lag 1 and beyond.

    Science.gov (United States)

    Akyürek, Elkan G; Wolff, Michael J

    2016-07-01

    In the perception of target stimuli in rapid serial visual presentations, the process of temporal integration plays an important role when two targets are presented in direct succession (at Lag 1), causing them to be perceived as a singular episodic event. This has been associated with increased reversals of target order report and elevated task performance in classic paradigms. Yet, most current models of temporal attention do not incorporate a mechanism of temporal integration and it is currently an open question whether temporal integration is a factor in attentional processing: It might be an independent process, perhaps little more than a sensory sampling rate parameter, isolated to Lag 1, where it leaves the attentional dynamics otherwise unaffected. In the present study, these boundary conditions were tested. Temporal target integration was observed across sequences of three targets spanning an interval of 240 ms. Integration rates furthermore depended strongly on bottom-up attentional filtering, and to a lesser degree on top-down control. The results support the idea that temporal integration is an adaptive process that is part of, or at least interacts with, the attentional system. Implications for current models of temporal attention are discussed.

  1. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  2. The deployment of visual attention during temporal integration : An electrophysiological investigation

    NARCIS (Netherlands)

    Akyürek, Elkan G.; Meijerink, Steven K.

    2012-01-01

    The deployment of attention during temporal integration was investigated with event-related potentials. Attentional selection of an integrated percept and an actual singleton were examined. Integration performance was related to modulations of the N2pc, N2, and P3 components. Singleton localization ...

  3. Bilingualism protects anterior temporal lobe integrity in aging.

    Science.gov (United States)

    Abutalebi, Jubin; Canini, Matteo; Della Rosa, Pasquale A; Sheung, Lo Ping; Green, David W; Weekes, Brendan S

    2014-09-01

    Cerebral gray-matter volume (GMV) decreases in normal aging, but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor, and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases: the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole-brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in the left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region-of-interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect may be bilateral, however, because the effect of naming performance on GMV in the right temporal pole was not significantly different. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor against GMV decreases in healthy aging.

  4. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten

    2007-01-01

    ...... The present study suggests a neural circuit for the transformation from the temporal to the rate-based code. Due to the neural connectivity of the circuit, bandpass-shaped rate modulation transfer functions are obtained that correspond to recorded functions of inferior colliculus (IC) neurons. In contrast to previous modeling studies, the present circuit does not employ a continuously changing temporal parameter to obtain different best modulation frequencies (BMFs) of the IC bandpass units. Instead, different BMFs are yielded from varying the number of input units projecting onto different bandpass units. In order to investigate the compatibility of the neural circuit with a linear modulation filterbank analysis as proposed in psychophysical studies, complex stimuli such as tones modulated by the sum of two sinusoids, narrowband noise, and iterated rippled noise were processed by the model. The model......
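
    To make the band-pass modulation analysis mentioned above concrete, here is a minimal, hedged Python sketch: it band-passes a rectified stimulus envelope around a candidate best modulation frequency and reads out output power as a firing-rate proxy. This illustrates the kind of band-pass rate MTF the record describes, not the authors' neural circuit; all filter parameters are our assumptions.

```python
# Minimal sketch of a rate-based modulation analysis (illustrative only).
import numpy as np
from scipy.signal import butter, lfilter

def modulation_rate(envelope, fs, bmf, bandwidth=0.5):
    """Band-pass the rectified envelope around bmf; return output power."""
    low = bmf * (1 - bandwidth / 2)
    high = bmf * (1 + bandwidth / 2)
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    rectified = np.maximum(envelope, 0.0)          # crude rectification stage
    return float(np.mean(lfilter(b, a, rectified) ** 2))

fs = 1000
t = np.arange(fs) / fs
env = 1.0 + np.sin(2 * np.pi * 40 * t)             # 40 Hz modulation
# A unit tuned to 40 Hz responds more than one tuned to 120 Hz:
print(modulation_rate(env, fs, bmf=40) > modulation_rate(env, fs, bmf=120))
```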

  5. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.
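
    The "interval-counting neuron" described above lends itself to a compact illustration. The following Python sketch is a hypothetical toy model (the threshold, tolerance, and interval values are invented for illustration): it spikes only after a threshold number of pulses arrive at near-optimal inter-pulse intervals, and a deviant interval resets the count.

```python
# Toy interval-counting neuron; all parameters are illustrative assumptions.
def interval_counting_neuron(pulse_times_ms, optimal_interval_ms=20.0,
                             tolerance_ms=5.0, threshold_count=5):
    """Return the times at which the model neuron fires."""
    spike_times = []
    count = 0
    for prev, curr in zip(pulse_times_ms, pulse_times_ms[1:]):
        interval = curr - prev
        if abs(interval - optimal_interval_ms) <= tolerance_ms:
            count += 1          # another pulse at an acceptable interval
        else:
            count = 0           # a deviant interval resets integration
        if count >= threshold_count:
            spike_times.append(curr)
    return spike_times

# A fast, regular pulse train at the optimal rate drives the neuron...
print(interval_counting_neuron([i * 20.0 for i in range(12)]))
# ...whereas slow pulses (60 ms intervals) never reach threshold.
print(interval_counting_neuron([i * 60.0 for i in range(12)]))
```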

  6. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
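
    As a toy illustration of the key claim, that the integrator's ramping rate tracks the variance of its input while the mean is held constant, consider the following Python sketch. It is not the authors' recurrent-network model; the rates and the all-or-none coincidence scheme are illustrative assumptions.

```python
# Toy variance integrator: spike coincidences raise input variance while
# the mean input is fixed, so the integrator ramps faster (illustrative).
import random

def ramping_rate(p_coincidence, n_inputs=100, n_steps=1000, seed=0):
    """Average per-step increment of an integrator of squared deviations."""
    rng = random.Random(seed)
    total = 0.0
    mean_rate = 0.1                      # per-input spike probability, fixed
    for _ in range(n_steps):
        if rng.random() < p_coincidence:     # all inputs spike together
            spikes = n_inputs if rng.random() < mean_rate else 0
        else:                                # inputs spike independently
            spikes = sum(rng.random() < mean_rate for _ in range(n_inputs))
        total += (spikes - n_inputs * mean_rate) ** 2   # squared deviation
    return total / n_steps

for p in (0.0, 0.5, 1.0):                # higher coincidence -> faster ramp
    print(p, ramping_rate(p))
```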

  7. Auditory and visual scene analysis: an overview

    Science.gov (United States)

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how ‘scene analysis’ is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044011

  8. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Following a review of the recent literature on the perception of temporal regularity in music, we discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion of the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  9. Neural interactions in unilateral colliculus and between bilateral colliculi modulate auditory signal processing

    Science.gov (United States)

    Mei, Hui-Xian; Cheng, Liang; Chen, Qi-Cai

    2013-01-01

    In the auditory pathway, the inferior colliculus (IC) is a major center for temporal and spectral integration of auditory information. There are widespread neural interactions in unilateral (one) IC and between bilateral (two) ICs that could modulate auditory signal processing such as the amplitude and frequency selectivity of IC neurons. These neural interactions are either inhibitory or excitatory, and are mostly mediated by γ-aminobutyric acid (GABA) and glutamate, respectively. However, the majority of interactions are inhibitory while excitatory interactions are in the minority. Such unbalanced properties between excitatory and inhibitory projections have an important role in the formation of unilateral auditory dominance and sound location, and the neural interaction in one IC and between two ICs provide an adjustable and plastic modulation pattern for auditory signal processing. PMID:23626523

  11. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi Tateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.

  12. A hierarchical nest survival model integrating incomplete temporally varying covariates

    Science.gov (United States)

    Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
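
    For readers unfamiliar with daily nest survival models, the following Python sketch shows the basic likelihood such models build on: a logit-linear daily survival probability driven by a time-varying covariate (here, a hypothetical insect index), with the nest-level likelihood formed as a product of daily terms. The hierarchical layer that imputes missing covariate values is omitted; the names and the logit link are standard modeling choices, not details taken from the paper.

```python
# Generic daily nest survival likelihood (illustrative, not the paper's model).
import math

def daily_survival(beta0, beta1, covariate):
    """Logit-linear daily survival probability."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * covariate)))

def nest_log_likelihood(beta0, beta1, covariates, survived_final_day):
    """Log-likelihood of one nest observed over len(covariates) days."""
    ll = 0.0
    for i, x in enumerate(covariates):
        s = daily_survival(beta0, beta1, x)
        last_day = (i == len(covariates) - 1)
        if last_day and not survived_final_day:
            ll += math.log(1.0 - s)   # nest failed on the final interval
        else:
            ll += math.log(s)         # nest survived this day
    return ll

# Example: survival declines as the insect index rises (beta1 < 0).
print(nest_log_likelihood(3.0, -0.5, [0.2, 0.4, 1.5, 2.0],
                          survived_final_day=False))
```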

  14. Comparison of LFP-based and spike-based spectro-temporal receptive fields and cross-correlation in cat primary auditory cortex.

    Science.gov (United States)

    Eggermont, Jos J; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg

    2011-01-01

    Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex.
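
    As background for spike-based STRFs, the sketch below shows the generic reverse-correlation estimator, the spike-triggered average of the stimulus spectrogram. This is a textbook construction, not the study's specific analysis pipeline.

```python
# Generic spike-triggered-average STRF estimator (illustrative).
import numpy as np

def strf_spike_triggered_average(spectrogram, spike_bins, n_history=40):
    """spectrogram: (n_freq, n_time) array; spike_bins: spike time-bin indices."""
    n_freq, n_time = spectrogram.shape
    sta = np.zeros((n_freq, n_history))
    used = 0
    for t in spike_bins:
        if t >= n_history:                        # need a full history window
            sta += spectrogram[:, t - n_history:t]
            used += 1
    return sta / max(used, 1)                     # frequency x time-lag STRF

# Toy usage with random data:
rng = np.random.default_rng(0)
spec = rng.normal(size=(32, 5000))
spikes = rng.integers(40, 5000, size=200)
print(strf_spike_triggered_average(spec, spikes).shape)  # (32, 40)
```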

  15. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    Science.gov (United States)

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  17. Impaired timing adjustments in response to time-varying auditory perturbation during connected speech production in persons who stutter.

    Science.gov (United States)

    Cai, Shanqing; Beal, Deryk S; Ghosh, Satrajit S; Guenther, Frank H; Perkell, Joseph S

    2014-02-01

    Auditory feedback (AF), the speech signal received by a speaker's own auditory system, contributes to the online control of speech movements. Recent studies based on AF perturbation provided evidence for abnormalities in the integration of auditory error with ongoing articulation and phonation in persons who stutter (PWS), but stopped short of examining connected speech. This is a crucial limitation considering the importance of sequencing and timing in stuttering. In the current study, we imposed time-varying perturbations on AF while PWS and fluent participants uttered a multisyllabic sentence. Two distinct types of perturbations were used to separately probe the control of the spatial and temporal parameters of articulation. While PWS exhibited only subtle anomalies in the AF-based spatial control, their AF-based fine-tuning of articulatory timing was substantially weaker than normal, especially in early parts of the responses, indicating slowness in the auditory-motor integration for temporal control.

  18. Temporal aggregation in a periodically integrated autoregressive process

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); H.P. Boswijk (Peter)

    1996-01-01

    A periodically integrated autoregressive process for a time series which is observed S times per year assumes the presence of S - 1 cointegration relations between the annual series containing the seasonal observations, with the additional feature that these relations are different across ...
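
    For readers unfamiliar with periodic integration, a standard formulation for an S-season periodic AR(1) is sketched below (the notation is ours, but the construction is the usual one in this literature):

```latex
% Periodic AR(1) with a season-dependent coefficient \alpha_{s(t)};
% periodic integration restricts the coefficients to multiply to one,
% which induces the S-1 cointegration relations mentioned above.
y_t = \alpha_{s(t)}\, y_{t-1} + \varepsilon_t, \qquad
\prod_{s=1}^{S} \alpha_s = 1 .
```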

  19. Event based self-supervised temporal integration for multimodal sensor data.

    Science.gov (United States)

    Barakova, Emilia I; Lourens, Tino

    2005-06-01

    A method for synergistic integration of multimodal sensor data is proposed in this paper. This method is based on two aspects of the integration process: (1) achieving synergistic integration of two or more sensory modalities, and (2) fusing the various information streams at particular moments during processing. Inspired by psychophysical experiments, we propose a self-supervised learning method for achieving synergy with combined representations. Evidence from temporal registration and binding experiments indicates that different cues are processed individually at specific time intervals. Therefore, an event-based temporal co-occurrence principle is proposed for the integration process. This integration method was applied to a mobile robot exploring unfamiliar environments. Simulations showed that integration enhanced route recognition with many perceptual similarities; moreover, they indicate that a perceptual hierarchy of knowledge about instant movement contributes significantly to short-term navigation, but that visual perceptions have bigger impact over longer intervals.

  20. Is It Necessary to Do Temporal Bone Computed Tomography of the Internal Auditory Canal in Tinnitus with Normal Hearing?

    Directory of Open Access Journals (Sweden)

    Tolgar Lutfi Kumral

    2013-01-01

    Full Text Available Objective. To investigate compression of the vestibulocochlear nerve as an etiology of tinnitus in normal-hearing ears using temporal bone computed tomography scans. Methods. A prospective nonrandomized study enrolled 30 patients with bilateral tinnitus and 30 normal-hearing controls. Results. A total of 60 patients (ages 16 to 87) were included. The tinnitus group comprised 11 males and 19 females (mean age 49.50 ± 12.008) and the control group comprised 6 males and 24 females (mean age 39.47 ± 12.544). Regarding the right and left internal acoustic canal measurements (inlet, midcanal, and outlet canal lengths), there were no significant differences between the control and tinnitus groups (P>0.005). There was no narrowing of the internal acoustic canal in the tinnitus group compared with the control group. High-frequency audiometric thresholds of the tinnitus group at 8000, 9000, 10000, 11200, 12500, 14000, 16000, and 18000 Hz differed significantly from those of the control group (P<0.05), indicating high-frequency hearing loss in the tinnitus group. Conclusion. No anatomical differences were found; the etiology of tinnitus appears to lie in physiological degeneration of the nerves rather than anatomical compression.

  1. Cognit activation: a mechanism enabling temporal integration in working memory

    OpenAIRE

    Fuster, Joaquín M.; Bressler, Steven L.

    2012-01-01

    Working memory is critical to the integration of information across time in goal-directed behavior, reasoning and language, yet its neural substrate is unknown. Based on recent research, we propose a mechanism by which the brain can retain working memory for prospective use, thereby bridging time in the perception/action cycle. The essence of the mechanism is the activation of cognits, which consist of distributed, overlapping and interactive cortical networks that in the aggregate encode the...

  2. Perception of global gestalt by temporal integration in simultanagnosia.

    Science.gov (United States)

    Huberle, Elisabeth; Rupek, Paul; Lappe, Markus; Karnath, Hans-Otto

    2009-01-01

    Patients with bilateral parieto-occipital brain damage may show intact processing of individual objects, while their perception of multiple objects is disturbed at the same time. The deficit is termed 'simultanagnosia' and has been discussed in the context of restricted visual working memory and impaired visuo-spatial attention. Recent observations indicated that the recognition of global shapes can be modulated by the spatial distance between individual objects in patients with simultanagnosia and thus is not an all-or-nothing phenomenon depending on spatial continuity. However, grouping mechanisms not only require the spatial integration of visual information, but also involve integration processes over time. The present study investigated motion-defined integration mechanisms in two patients with simultanagnosia. We applied hierarchical organized stimuli of global objects that consisted of coherently moving dots ('shape-from-motion'). In addition, we tested the patients' ability to recognize biological motion by presenting characteristic human movements ('point-light-walker'). The data revealed largely preserved perception of biological motion, while the perception of motion-defined shapes was impaired. Our findings suggest separate mechanisms underlying the recognition of biological motion and shapes defined by coherently moving dots. They thus argue against a restriction in the overall capacity of visual working memory over time as a general explanation for the impaired global shape recognition in patients with simultanagnosia.

  3. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how does the network across the whole brain participates during multisensory perception processing remains an open question. We posit that a large-scale functional connectivity among the neural population situated in distributed brain sites may provide valuable insights involved in processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs was computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our
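
    The "time-frequency global coherence" idea can be illustrated with a small sketch: band-limited coherence for all sensor pairs, summarized across the array. The Welch-based coherence estimator and the simple average are our assumptions, not the authors' exact computation.

```python
# Sketch of a band-limited global coherence summary (illustrative).
import numpy as np
from scipy.signal import coherence

def global_coherence(eeg, fs, band=(30.0, 45.0)):
    """eeg: (n_sensors, n_samples). Mean band coherence over sensor pairs."""
    n = eeg.shape[0]
    values = []
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=256)
            mask = (f >= band[0]) & (f <= band[1])
            values.append(cxy[mask].mean())
    return float(np.mean(values))

# Toy usage: 8 channels of noise sampled at 250 Hz.
rng = np.random.default_rng(1)
print(global_coherence(rng.normal(size=(8, 2500)), fs=250))
```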

  5. The temporal characteristics of Ca2+ entry through L-type and T-type Ca2+ channels shape exocytosis efficiency in chick auditory hair cells during development.

    Science.gov (United States)

    Levic, Snezana; Dulon, Didier

    2012-12-01

    During development, synaptic exocytosis by cochlear hair cells is first initiated by patterned spontaneous Ca(2+) spikes and, at the onset of hearing, by sound-driven graded depolarizing potentials. The molecular reorganization occurring in the hair cell synaptic machinery during this developmental transition still remains elusive. We characterized the changes in biophysical properties of voltage-gated Ca(2+) currents and exocytosis in developing auditory hair cells of a precocial animal, the domestic chick. We found that immature chick hair cells (embryonic days 10-12) use two types of Ca(2+) currents to control exocytosis: low-voltage-activating, rapidly inactivating (mibefradil sensitive) T-type Ca(2+) currents and high-voltage-activating, noninactivating (nifedipine sensitive) L-type currents. Exocytosis evoked by T-type Ca(2+) current displayed a fast release component (RRP) but lacked the slow sustained release component (SRP), suggesting an inefficient recruitment of distant synaptic vesicles by this transient Ca(2+) current. With maturation, the participation of L-type Ca(2+) currents to exocytosis largely increased, inducing a highly Ca(2+) efficient recruitment of an RRP and an SRP component. Notably, L-type-driven exocytosis in immature hair cells displayed higher Ca(2+) efficiency when triggered by prerecorded native action potentials than by voltage steps, whereas similar efficiency for both protocols was found in mature hair cells. This difference likely reflects a tighter coupling between release sites and Ca(2+) channels in mature hair cells. Overall, our results suggest that the temporal characteristics of Ca(2+) entry through T-type and L-type Ca(2+) channels greatly influence synaptic release by hair cells during cochlear development.

  6. Temporal integration of loudness in listeners with hearing losses of primarily cochlear origin

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1999-01-01

    ... high-frequency hearing losses (slopes >50 dB/octave) showed larger-than-normal maximal amounts of temporal integration (40 to 50 dB). This finding is consistent with the shallow loudness functions predicted by our excitation-pattern model for impaired listeners [in Modeling Sensorineural Hearing Loss, edited by W. Jesteadt (Erlbaum, Mahwah, NJ, 1997), pp. 187–198]. Loudness functions derived from impaired listeners' temporal-integration functions indicate that restoration of loudness in listeners with cochlear hearing loss usually will require the same gain whether the sound is short or long. ©1999 Acoustical Society of America.
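
    For context, the "amount of temporal integration" in this literature is conventionally the level difference between a short and a long tone at equal loudness; a sketch of the definition follows (the durations are illustrative, though 5-ms and 200-ms tones are typical of this line of work):

```latex
% Amount of temporal integration as an equal-loudness level difference;
% durations are illustrative assumptions.
\Delta L \;=\; L_{\text{short}}(5\,\mathrm{ms}) \;-\; L_{\text{long}}(200\,\mathrm{ms})
```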

  7. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.
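
    One plausible reading of "temporal integration of dissonant events in auditory short-term memory" is a leaky, exponentially decaying trace of frame-wise dissonance. The following Python sketch illustrates that reading; the decay constant is an assumption, not a parameter from the paper.

```python
# Leaky memory trace of per-frame dissonance values (illustrative).
def memory_dissonance(frame_dissonance, decay=0.9):
    trace = 0.0
    out = []
    for d in frame_dissonance:
        trace = decay * trace + (1.0 - decay) * d   # exponential decay + input
        out.append(trace)
    return out

print(memory_dissonance([0.1, 0.8, 0.9, 0.2, 0.1]))
```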

  8. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  9. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD ±7.0%); auditory cues, 12.9% (SD ±11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  10. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  11. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  12. Multimodal integration of micro-Doppler sonar and auditory signals for behavior classification with convolutional networks.

    Science.gov (United States)

    Dura-Bernal, Salvador; Garreau, Guillaume; Georgiou, Julius; Andreou, Andreas G; Denham, Susan L; Wennekers, Thomas

    2013-10-01

    The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
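
    The "probabilistic combination of system output through time" and across modalities can be sketched as summing log-posteriors over frames and then across modalities. The conditional-independence assumption is ours; the paper's exact scheme may differ.

```python
# Sketch of late probabilistic fusion over time and modalities (illustrative).
import numpy as np

def fuse(audio_probs, sonar_probs):
    """Each input: (n_frames, n_classes) of per-frame class posteriors."""
    log_audio = np.log(audio_probs).sum(axis=0)   # combine frames over time
    log_sonar = np.log(sonar_probs).sum(axis=0)
    joint = log_audio + log_sonar                 # multimodal integration
    return int(np.argmax(joint))                  # predicted behavior class

rng = np.random.default_rng(0)
a = rng.dirichlet(np.ones(7), size=50)            # 7 behaviors, 50 frames
s = rng.dirichlet(np.ones(7), size=50)
print(fuse(a, s))
```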

  13. Comparing the influence of spectro-temporal integration in computational speech segregation

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2016-01-01

    The goal of computational speech segregation systems is to automatically segregate a target speaker from interfering maskers. Typically, these systems include a feature extraction stage in the front-end and a classification stage in the back-end. A spectrotemporal integration strategy can...... metric that comprehensively predicts computational segregation performance and correlates well with intelligibility. The outcome of this study could help to identify the most effective spectro-temporal integration strategy for computational segregation systems....
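
    A common form of spectro-temporal integration in such front-ends is stacking neighboring time-frequency frames so the back-end classifier sees temporal context. The sketch below shows this generic strategy; the window size is an illustrative assumption, not the strategy evaluated in the record.

```python
# Generic context-frame stacking for a mask-estimation classifier (illustrative).
import numpy as np

def stack_context(features, n_context=2):
    """features: (n_frames, n_channels). Returns context-expanded features."""
    padded = np.pad(features, ((n_context, n_context), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(features)]
                      for i in range(2 * n_context + 1)])

x = np.random.default_rng(0).normal(size=(100, 64))
print(stack_context(x).shape)   # (100, 320): 5 frames x 64 channels
```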

  14. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  15. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration.
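
    Simultaneity-judgment data of this kind are commonly summarized by fitting a Gaussian-like curve to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs); allowing different half-widths on the two sides makes the auditory-leading/visual-leading asymmetry explicit. A minimal sketch with made-up data (the study's actual fitting procedure is not given in this record):

        import numpy as np
        from scipy.optimize import curve_fit

        def sj_curve(soa, amp, mu, sigma_a, sigma_v):
            """Asymmetric Gaussian: sigma_a governs auditory-leading SOAs
            (left of the peak), sigma_v visual-leading SOAs (right of it)."""
            sigma = np.where(soa < mu, sigma_a, sigma_v)
            return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

        soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300])   # ms
        p_sync = np.array([0.1, 0.3, 0.7, 0.9, 0.95, 0.9, 0.85, 0.6, 0.25])
        (amp, mu, s_a, s_v), _ = curve_fit(sj_curve, soas, p_sync,
                                           p0=(1.0, 0.0, 80.0, 120.0))
        print(f"auditory-leading width ~{s_a:.0f} ms, visual-leading ~{s_v:.0f} ms")

    With such a fit, the asymmetry reported above shows up directly as the visual-leading width exceeding the auditory-leading width.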

  16. Differential roles for left inferior frontal and superior temporal cortex in multimodal integration of action and language

    NARCIS (Netherlands)

    Willems, R.M.; Özyürek, A.; Hagoort, P.

    2009-01-01

    Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in integration of action and language.

  17. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    Science.gov (United States)

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  18. Temporal structure in audiovisual sensory selection.

    Directory of Open Access Journals (Sweden)

    Anne Kösem

    Full Text Available In natural environments, sensory information is embedded in temporally contiguous streams of events. This is typically the case when seeing and listening to a speaker or when engaged in scene analysis. In such contexts, two mechanisms are needed to single out and build a reliable representation of an event (or object): the temporal parsing of information and the selection of relevant information in the stream. It has previously been shown that rhythmic events naturally build temporal expectations that improve sensory processing at predictable points in time. Here, we asked to what extent temporal regularities can improve the detection and identification of events across sensory modalities. To do so, we used a dynamic visual conjunction search task accompanied by auditory cues synchronized or not with the color change of the target (horizontal or vertical bar). Sounds synchronized with the visual target improved search efficiency for temporal rates below 1.4 Hz but did not affect efficiency above that stimulation rate. Desynchronized auditory cues consistently impaired visual search below 3.3 Hz. Our results are interpreted in the context of the Dynamic Attending Theory: specifically, we suggest that a cognitive operation structures events in time irrespective of the sensory modality of input. Our results further support and specify recent neurophysiological findings by showing strong temporal selectivity for audiovisual integration in the auditory-driven improvement of visual search efficiency.

  19. Comparing the influence of spectro-temporal integration in computational speech segregation

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne;

    2016-01-01

    The goal of computational speech segregation systems is to automatically segregate a target speaker from interfering maskers. Typically, these systems include a feature extraction stage in the front-end and a classification stage in the back-end. A spectrotemporal integration strategy can...... be applied in either the frontend, using the so-called delta features, or in the back-end, using a second classifier that exploits the posterior probability of speech from the first classifier across a spectro-temporal window. This study systematically analyzes the influence of such stages on segregation...... metric that comprehensively predicts computational segregation performance and correlates well with intelligibility. The outcome of this study could help to identify the most effective spectro-temporal integration strategy for computational segregation systems....
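
    The front-end variant mentioned above augments each time frame with delta (temporal difference) features; the regression-style computation below is the common implementation, though the window half-width used in the study is an assumption here.

        import numpy as np

        def delta_features(feats, width=2):
            """Regression-style delta features over time.
            feats: (frames, channels) features, e.g., a ratemap."""
            pad = np.pad(feats, ((width, width), (0, 0)), mode="edge")
            n = len(feats)
            num = sum(d * (pad[width + d : n + width + d]
                           - pad[width - d : n + width - d])
                      for d in range(1, width + 1))
            return num / (2 * sum(d * d for d in range(1, width + 1)))

        feats = np.random.rand(100, 32)               # 100 frames x 32 channels
        augmented = np.hstack([feats, delta_features(feats)])   # classifier input

    The back-end alternative instead feeds the first classifier's posterior probabilities, pooled over a spectro-temporal window, into a second classifier.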

  20. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  1. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  2. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  3. Multisensory Illusions and the Temporal Binding Window

    Directory of Open Access Journals (Sweden)

    Ryan A Stevenson

    2011-10-01

    Full Text Available The ability of our sensory systems to merge sensory information from distinct modalities is remarkable. One stimulus characteristic utilized in this operation is temporal coincidence. Auditory and visual information are integrated within a narrow range of temporal offsets, known as the temporal binding window (TBW), which varies between individuals, stimulus type, and task. In this series of experiments, we assessed the relationship within individuals between the width of their TBW and their ability to integrate audiovisual information. The TBW was measured through a perceived subjective simultaneity task. In conjunction with this, we measured each individual's ability to integrate auditory and visual information with two multisensory illusions, the McGurk effect and the flash-beep illusion. The results from these studies demonstrate that the TBW is highly correlated with the individual's ability to integrate. These relationships were seen only in the right TBW, in which visual presentations preceded auditory presentations, a finding that is ecologically logical. However, differences were seen between the two illusory conditions: the McGurk effect was stronger in individuals with narrow TBWs, again an ecologically logical finding. The opposite relationship was seen with the flash-beep illusion, possibly due to inherent asynchronies in the illusion.

  4. Integrity of medial temporal structures may predict better improvement of spatial neglect with prism adaptation treatment.

    Science.gov (United States)

    Chen, Peii; Goedert, Kelly M; Shah, Priyanka; Foundas, Anne L; Barrett, A M

    2014-09-01

    Prism adaptation treatment (PAT) is a promising rehabilitative method for functional recovery in persons with spatial neglect. Previous research suggests that PAT improves motor-intentional "aiming" deficits that frequently occur with frontal lesions. To test whether presence of frontal lesions predicted better improvement of spatial neglect after PAT, the current study evaluated neglect-specific improvement in functional activities (assessment with the Catherine Bergego Scale) over time in 21 right-brain-damaged stroke survivors with left-sided spatial neglect. The results demonstrated that neglect patients' functional activities improved after two weeks of PAT and continued improving for four weeks. Such functional improvement did not occur equally in all of the participants: Neglect patients with lesions involving the frontal cortex (n = 13) experienced significantly better functional improvement than did those without frontal lesions (n = 8). More importantly, voxel-based lesion-behavior mapping (VLBM) revealed that in comparison to the group of patients without frontal lesions, the frontal-lesioned neglect patients had intact regions in the medial temporal areas, the superior temporal areas, and the inferior longitudinal fasciculus. The medial cortical and subcortical areas in the temporal lobe were especially distinguished in the "frontal lesion" group. The findings suggest that the integrity of medial temporal structures may play an important role in supporting functional improvement after PAT.

  5. [Towards an integrated approach to infantile autism: the superior temporal lobe between neurosciences and psychoanalysis].

    Science.gov (United States)

    Golse, Bernard; Robel, Laurence

    2009-02-01

    The superior temporal lobe is currently at the focus of intensive research in infantile autism, a psychopathologic disorder apparently representing the severest failure of access to intersubjectivity, i.e. the ability to accept that others exist independently of oneself. Access to intersubjectivity seems to involve the superior temporal lobe, which is the seat of several relevant functions such as face and voice recognition and perception of others' movements, and coordinates the different sensory inputs that identify an object as being "external". The psychoanalytic approach to infantile autism and recent cognitive data are now converging, and intersubjectivity is considered to result from "mantling" or comodalization of sensory inputs from external objects. Recent brain neuroimaging studies point to anatomic and functional abnormalities of the superior temporal lobe in autistic children. Dialogue is therefore possible between these different disciplines, opening the way to an integrated view of infantile autism in which the superior temporal lobe holds a central place, not necessarily as a primary cause of autism but rather as an intermediary or a reflection of autistic functioning.

  6. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions: children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and in background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features, even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  7. Recording of electrically evoked auditory brainstem responses (E-ABR) with an integrated stimulus generator in Matlab.

    Science.gov (United States)

    Bahmer, Andreas; Peter, Otto; Baumann, Uwe

    2008-08-30

    Electrically evoked auditory brainstem responses (E-ABRs) of subjects with cochlear implants are used for monitoring the physiologic responses of early signal processing in the auditory system. Additionally, E-ABR measurements allow the diagnosis of retro-cochlear diseases. Therefore, E-ABR should be available in every cochlear implant center as a diagnostic tool. In this paper, we introduce a low-cost setup designed to perform an E-ABR as well as a conventional ABR for research purposes. The distributable form was developed with Matlab and the Matlab Compiler (The MathWorks Inc.). For the ABR, only a PC with a soundcard, conventional system headphones, and an EEG pre-amplifier are necessary; for E-ABR, in addition, an interface to the cochlear implant is required. For our purposes, we implemented an interface for the Combi 40+/Pulsar implant (MED-EL, Innsbruck).
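
    The original tool is written in Matlab; as a language-neutral illustration of the core computation, the Python sketch below averages EEG epochs time-locked to stimulus triggers, which is the essential step of any ABR or E-ABR measurement (sampling rate, window, and the simulated signal are placeholders).

        import numpy as np

        def average_abr(eeg, trigger_idx, fs, win_ms=10.0):
            """Average EEG epochs time-locked to stimulus onsets.
            eeg: 1-D recording; trigger_idx: onset sample indices."""
            n = int(win_ms * fs / 1000)
            epochs = np.stack([eeg[i:i + n] for i in trigger_idx
                               if i + n <= len(eeg)])
            return epochs.mean(axis=0)    # thousands of sweeps are typical

        fs = 20000                            # 20 kHz EEG sampling
        eeg = np.random.randn(fs * 10)        # 10 s of simulated noisy EEG
        triggers = np.arange(0, fs * 10 - 400, fs // 30)   # ~30 stimuli/s
        abr = average_abr(eeg, triggers, fs)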

  8. Integrating Temporal and Spectral Features of Astronomical Data Using Wavelet Analysis for Source Classification

    CERN Document Server

    Ukwatta, T N

    2016-01-01

    Temporal and spectral information extracted from a stream of photons received from astronomical sources is the foundation on which we build understanding of various objects and processes in the Universe. Typically astronomers fit a number of models separately to light curves and spectra to extract relevant features. These features are then used to classify, identify, and understand the nature of the sources. However, these feature extraction methods may not be optimally sensitive to unknown properties of light curves and spectra. One can use the raw light curves and spectra as features to train classifiers, but this typically increases the dimensionality of the problem, often by several orders of magnitude. We overcome this problem by integrating light curves and spectra to create an abstract image and using wavelet analysis to extract important features from the image. Such features incorporate both temporal and spectral properties of the astronomical data. Classification is then performed on those abstract ...
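
    The exact image construction is not described in this record; a minimal sketch, assuming the "abstract image" is simply the outer product of a light curve and a spectrum, with 2-D wavelet subband statistics (via the PyWavelets package) as the extracted features:

        import numpy as np
        import pywt   # PyWavelets

        def abstract_image(light_curve, spectrum):
            """Fuse the two 1-D series into a 2-D time-by-energy image;
            the paper's actual construction may differ."""
            return np.outer(light_curve, spectrum)

        def wavelet_features(img, wavelet="haar", level=3):
            """Mean absolute value of each 2-D wavelet subband as a
            compact feature vector for a classifier."""
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            feats = [np.abs(coeffs[0]).mean()]
            for cH, cV, cD in coeffs[1:]:
                feats += [np.abs(c).mean() for c in (cH, cV, cD)]
            return np.array(feats)

        lc = np.random.rand(64)    # photon counts vs. time (placeholder)
        sp = np.random.rand(64)    # counts vs. energy channel (placeholder)
        print(wavelet_features(abstract_image(lc, sp)).shape)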

  9. Daytime Sleepiness Is Associated With Reduced Integration of Temporally Distant Outcomes on the Iowa Gambling Task.

    Science.gov (United States)

    Olson, Elizabeth A; Weber, Mareen; Rauch, Scott L; Killgore, William D S

    2016-01-01

    Sleep deprivation is associated with performance decrements on some measures of executive functioning. For instance, sleep deprivation results in altered decision making on the Iowa Gambling Task. However, it is unclear which component processes of the task may be driving the effect. In this study, Iowa Gambling Task performance was decomposed using the Expectancy-Valence model. Recent sleep debt and greater daytime sleepiness were associated with higher scores on the updating parameter, which reflects the extent to which recent experiences are emphasized over remote ones. Findings suggest that the effects of insufficient sleep on IGT performance are due to shortening of the time horizon over which decisions are integrated. These findings may have clinical implications in that individuals with sleep problems may not integrate more temporally distant information when making decisions.
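
    In the Expectancy-Valence model, the updating (recency) parameter controls how strongly a deck's expectancy is pulled toward the most recent outcome; a worked sketch of just this delta rule (the parameter value and outcome valences are invented, and the full model additionally weights wins against losses):

        def update_expectancy(E, v, phi):
            """Expectancy-Valence delta rule: E <- E + phi * (v - E).
            phi near 1: only recent outcomes matter (short time horizon);
            phi near 0: outcomes are integrated over many past trials."""
            return E + phi * (v - E)

        # an outcome k trials back is weighted by phi * (1 - phi)**k, so a
        # high phi (as reported with greater sleepiness) shortens the horizon
        E = 0.0
        for v in [10, 10, -50, 10, 10]:        # valences of successive picks
            E = update_expectancy(E, v, phi=0.8)
        print(round(E, 2))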

  10. The integration of song environment by catecholaminergic systems innervating the auditory telencephalon of adult female European starlings.

    Science.gov (United States)

    Sockman, Keith W; Salvante, Katrina G

    2008-04-01

    Mate choice is among the most consequential decisions a sexually reproducing organism can make. In many songbird species, females make mate-choice decisions based, in part, on variation between males in songs that reflect their quality. Importantly, females may adjust their choice relative to the prevalence of high quality songs. In European starlings (Sturnus vulgaris), females prefer males that primarily sing long songs over those that primarily sing short songs, and sensitivity of the auditory telencephalon to song length depends on the prevalence of long songs in the environment. Several lines of evidence suggest a role for noradrenergic innervation of the auditory telencephalon in mediating this neuro- and behavioral plasticity. To simulate variation in quality of the song environment, we exposed adult female starlings to 1 week of either long or short songs and then quantified several monoamines and their metabolites in the caudomedial mesopallium and caudomedial nidopallium (NCM) using high performance liquid chromatography. We also used immunocytochemistry to assess these areas for immunoreactive dopamine-beta-hydroxylase (DBH-ir), the enzyme that synthesizes norepinephrine. We found that long songs elevated levels of the principal norepinephrine metabolite, the principal dopamine metabolite, and the probability of DBH-ir in the NCM compared to short songs. Song environment did not appear to influence norepinephrine or dopamine levels. Thus, the quality of the song environment regulates the local secretion of catecholamines, particularly norepinephrine, in the female auditory telencephalon. This may form a basis for plasticity in forebrain sensitivity and mate-choice behavior based on the prevalence of high-quality males.

  11. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning), as they practiced novel melodies, and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): higher auditory imagery skill predicted greater temporal regularity during Recall in the

  12. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  13. Temporal structure and complexity affect audio-visual correspondence detection

    Directory of Open Access Journals (Sweden)

    Rachel N Denison

    2013-01-01

    Full Text Available Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration.

  14. Estimation of Spatially and Temporally Varied Groundwater Recharge from Precipitation Using a Systematic and Integrated Approach

    Science.gov (United States)

    Wang, M.

    2006-05-01

    Quantitative determination of spatially and temporally varied groundwater recharge from precipitation is a complex issue involving many control factors, and investigators face great challenges in quantifying the relationship between groundwater recharge and its controls. In fact, its quantification is a complex process in which unstructured decisions are generally involved. The Analytic Hierarchy Process (AHP) is a systematic method for powerful and flexible decision making, used to determine priorities and make the best decision when both qualitative and quantitative aspects of a decision need to be accounted for. Moreover, by reducing complex decisions to a series of one-on-one comparisons and then synthesizing the results, the rationale for the best decision can be clearly understood. In this study, a systematic and integrated approach for estimation of spatially and temporally varied groundwater recharge from precipitation is proposed, in which remote sensing (RS), GIS, AHP, and modeling techniques are coupled. A case study is presented to demonstrate its application. Based on field surveys and information analyses, the pertinent factors for groundwater recharge are assessed and the dominating factors are identified. An analytical model is then established for estimation of the spatially and temporally varied groundwater recharge from precipitation, in which the contribution potentials to groundwater recharge and the relative weights of the dominating factors are taken into account. The contribution potentials can be assessed by adopting fuzzy membership functions and integrating expert opinions. The weight for each of the dominating factors can systematically be determined through coupling of the RS, GIS, and AHP techniques. To reduce model uncertainty, this model should be further calibrated systematically and validated using even limited groundwater field data such as observed groundwater heads and groundwater discharges into
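
    At the core of AHP, the priority weights are the principal eigenvector of a pairwise-comparison matrix. The sketch below illustrates this with three hypothetical recharge factors; the factor names and Saaty-scale judgments are invented for the example.

        import numpy as np

        def ahp_weights(pairwise):
            """Priority weights = principal right eigenvector of the
            pairwise-comparison matrix, normalized to sum to 1."""
            vals, vecs = np.linalg.eig(pairwise)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            return w / w.sum()

        # rainfall intensity vs. land cover vs. soil permeability
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        print(ahp_weights(A))    # roughly [0.65, 0.23, 0.12]

    In practice a consistency ratio is also computed from the principal eigenvalue to check that the expert judgments are not self-contradictory.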

  15. Auditory evoked potentials and multiple sclerosis

    OpenAIRE

    Carla Gentile Matas; Sandro Luiz de Andrade Matas; Caroline Rondina Salzano de Oliveira; Isabela Crivellaro Gonçalves

    2010-01-01

    Multiple sclerosis (MS) is an inflammatory, demyelinating disease that can affect several areas of the central nervous system. Damage along the auditory pathway can alter its integrity significantly. Therefore, it is important to investigate the auditory pathway, from the brainstem to the cortex, in individuals with MS. OBJECTIVE: The aim of this study was to characterize auditory evoked potentials in adults with MS of the remittent-recurrent type. METHOD: The study comprised 25 individuals w...

  16. Moving on time: brain network for auditory-motor synchronization is modulated by rhythm complexity and musical training.

    Science.gov (United States)

    Chen, Joyce L; Penhune, Virginia B; Zatorre, Robert J

    2008-02-01

    Much is known about the motor system and its role in simple movement execution. However, little is understood about the neural systems underlying auditory-motor integration in the context of musical rhythm, or the enhanced ability of musicians to execute precisely timed sequences. Using functional magnetic resonance imaging, we investigated how performance and neural activity were modulated as musicians and nonmusicians tapped in synchrony with progressively more complex and less metrically structured auditory rhythms. A functionally connected network was implicated in extracting higher-order features of a rhythm's temporal structure, with the dorsal premotor cortex mediating these auditory-motor interactions. In contrast to past studies, musicians recruited the prefrontal cortex to a greater degree than nonmusicians, whereas secondary motor regions were recruited to the same extent. We argue that the superior ability of musicians to deconstruct and organize a rhythm's temporal structure relates to the greater involvement of the prefrontal cortex mediating working memory.

  18. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model adaptation. Single-frame pose recovery consists of a hypothesis generation...

  19. Convergent validity of the Integrated Visual and Auditory Continuous Performance Test (IVA+Plus): associations with working memory, processing speed, and behavioral ratings.

    Science.gov (United States)

    Arble, Eamonn; Kuentzel, Jeffrey; Barnett, Douglas

    2014-05-01

    Though the Integrated Visual and Auditory Continuous Performance Test (IVA + Plus) is commonly used by researchers and clinicians, few investigations have assessed its convergent and discriminant validity, especially with regard to its use with children. The present study details correlates of the IVA + Plus using measures of cognitive ability and ratings of child behavior (parent and teacher), drawing upon a sample of 90 psychoeducational evaluations. Scores from the IVA + Plus correlated significantly with the Working Memory and Processing Speed Indexes from the Fourth Edition of the Wechsler Intelligence Scales for Children (WISC-IV), though fewer and weaker significant correlations were seen with behavior ratings scales, and significant associations also occurred with WISC-IV Verbal Comprehension and Perceptual Reasoning. The overall pattern of relations is supportive of the validity of the IVA + Plus; however, general cognitive ability was associated with better performance on most of the primary scores of the IVA + Plus, suggesting that interpretation should take intelligence into account.

  20. Nonretinotopic perception of orientation: Temporal integration of basic features operates in object-based coordinates.

    Science.gov (United States)

    Wutz, Andreas; Drewes, Jan; Melcher, David

    2016-08-01

    Early, feed-forward visual processing is organized in a retinotopic reference frame. In contrast, visual feature integration on longer time scales can involve object-based or spatiotopic coordinates. For example, in the Ternus-Pikler (T-P) apparent motion display, object identity is mapped across the object motion path. Here, we report evidence from three experiments supporting nonretinotopic feature integration even for the most paradigmatic example of retinotopically-defined features: orientation. We presented observers with a repeated series of T-P displays in which the perceived rotation of Gabor gratings indicates processing in either retinotopic or object-based coordinates. In Experiment 1, the frequency of perceived retinotopic rotations decreased exponentially for longer interstimulus intervals (ISIs) between T-P display frames, with object-based percepts dominating after about 150-250 ms. In a second experiment, we show that motion and rotation judgments depend on the perception of a moving object during the T-P display ISIs rather than only on temporal factors. In Experiment 3, we cued the observers' attentional state either toward a retinotopic or object motion-based reference frame and then tracked both the observers' eye position and the time course of the perceptual bias while viewing identical T-P display sequences. Overall, we report novel evidence for spatiotemporal integration of even basic visual features such as orientation in nonretinotopic coordinates, in order to support perceptual constancy across self- and object motion.

  1. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    Science.gov (United States)

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  2. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke. They are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  3. Temporal integration of the pi 1/pi 3 pathway in normal and dichromatic vision.

    Science.gov (United States)

    Friedman, L J; Yim, M H; Pugh, E N

    1984-01-01

    Stiles' pi 1 and pi 3 mechanisms are thought to reflect adaptation events at two sites in a single pathway, the first site controlled by the short-wavelength cones alone, the second site controlled by opposing signals from these cones vs the other cone classes. We examined this pathway's temporal integration under conditions that yield the full gamut of possible adaptation states at the two sites. Critical duration of the pi 1/pi 3 pathway was always about 200 msec. In addition, we examined the pi 1 and pi 3 mechanisms of dichromatic vision. Our results suggest that protanopic and deuteranopic vision are characterized by a pi 1/pi 3 pathway similar to that in normal color vision.
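
    A critical duration of roughly 200 ms is the classical signature of complete temporal integration. As an illustration (not the authors' exact formulation), the simplest description is Bloch's law, in which threshold intensity $I_T$ trades off against stimulus duration $t$ up to a critical duration $t_c$:

        \[
          I_T \cdot t = \mathrm{const.} \quad (t \le t_c), \qquad
          I_T = \mathrm{const.} \quad (t > t_c), \qquad t_c \approx 200\,\mathrm{ms}.
        \]

    On log-log axes this yields a threshold-versus-duration curve with slope -1 below $t_c$ and a flat segment above it.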

  4. Temporal integration near threshold fine structure - The role of cochlear processing

    DEFF Research Database (Denmark)

    Epp, Bastian; Mauermann, Manfred; Verhey, Jesko L.

    The hearing thresholds of normal hearing listeners often show quasi-periodic variations when measured with a high frequency resolution. This hearing threshold fine structure is related to other frequency specific variations in the perception of sound such as loudness and amplitude modulated tones...... at low intensities. The detection threshold of a pulsed tone also depends not only on the pulse duration, but also on the position of its frequency within threshold fine structure. The present study investigates if psychoacoustical data on detection of a pulsed tone can be explained with a nonlinear...... structure, but lack a decrease of thresholds with increased pulse duration. The model was extended by including a temporal integrator which introduces a low-pass behavior of the data with different slopes of the predicted threshold curves, producing good agreement with the data. On the basis of the model......

  5. An Integrated Approach of Model checking and Temporal Fault Tree for System Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2009-10-15

    Digitalization of instruments and control systems in nuclear power plants offers the potential to improve plant safety and reliability through features such as increased hardware reliability and stability, and improved failure detection capability. However, it makes the systems and their safety analysis more complex. Originally, safety analysis was applied to hardware system components and formal methods mainly to software. For software-controlled or digitalized systems, it is necessary to integrate both. Fault tree analysis (FTA), which has been one of the most widely used safety analysis techniques in the nuclear industry, suffers from several drawbacks described in the literature. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computation tree logic (TCTL) to capture absolute-time behaviors of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and we use model checking to automate the reasoning process of FTA.
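
    The record does not spell out the temporal gates themselves; as one plausible illustration (an assumption, not necessarily the paper's gate definitions), a timed priority-AND gate requiring basic event $A$ to fail before $B$, with $B$ following within $t$ time units, could be written in TCTL as

        \[
          \mathrm{PAND}_{\le t}(A, B) \;\equiv\;
          \mathrm{EF}\,\bigl(A \wedge \neg B \wedge \mathrm{EF}_{\le t}\,B\bigr).
        \]

    A model checker can then automatically search the formal system model for runs satisfying such a formula, which is the kind of reasoning the proposed integration automates.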

  6. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common example is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of an absent stimulus. Here we discuss several definitions of hallucination: (1) perceiving a stimulus without the presence of any object; (2) hallucination proper, i.e., false perceptions that are not falsifications of real perceptions, but manifest as new subjects occurring alongside, and synchronously with, a real perception; (3) hallucination as an out-of-body perception that corresponds to no real object. In a stricter sense, hallucinations are defined as perceptions in a conscious and awake state, in the absence of external stimuli, which have the qualities of real perception in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  7. Integrating spatial and temporal oxygen data to improve the quantification of in situ petroleum biodegradation rates.

    Science.gov (United States)

    Davis, Gregory B; Laslett, Dean; Patterson, Bradley M; Johnston, Colin D

    2013-03-15

    Accurate estimation of biodegradation rates during remediation of petroleum impacted soil and groundwater is critical to avoid excessive costs and to ensure remedial effectiveness. Oxygen depth profiles or oxygen consumption over time are often used separately to estimate the magnitude and timeframe for biodegradation of petroleum hydrocarbons in soil and subsurface environments. Each method has limitations. Here we integrate spatial and temporal oxygen concentration data from a field experiment to develop better estimates and more reliably quantify biodegradation rates. During a nine-month bioremediation trial, 84 sets of respiration rate data (where aeration was halted and oxygen consumption was measured over time) were collected from in situ oxygen sensors at multiple locations and depths across a diesel non-aqueous phase liquid (NAPL) contaminated subsurface. Additionally, detailed vertical soil moisture (air-filled porosity) and NAPL content profiles were determined. The spatial and temporal oxygen concentration (respiration) data were modeled assuming one-dimensional diffusion of oxygen through the soil profile which was open to the atmosphere. Point and vertically averaged biodegradation rates were determined, and compared to modeled data from a previous field trial. Point estimates of biodegradation rates assuming no diffusion ranged up to 58 mg kg(-1) day(-1) while rates accounting for diffusion ranged up to 87 mg kg(-1) day(-1). Typically, accounting for diffusion increased point biodegradation rate estimates by 15-75% and vertically averaged rates by 60-80% depending on the averaging method adopted. Importantly, ignoring diffusion led to overestimation of biodegradation rates where the location of measurement was outside the zone of NAPL contamination. Over or underestimation of biodegradation rate estimates leads to cost implications for successful remediation of petroleum impacted sites.
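
    The mass balance behind such estimates can be made explicit: with one-dimensional diffusion through a soil column open to the atmosphere, the consumption rate is the diffusive term minus the observed concentration change, so ignoring diffusion biases the estimate wherever diffusive flux replenishes or drains oxygen. A finite-difference sketch (grid, diffusivity, and concentrations are placeholders; in practice the effective diffusivity follows from air-filled porosity):

        import numpy as np

        def consumption_rate(C, dz, dt, D_eff):
            """Estimate O2 consumption from spatio-temporal concentrations.
            Mass balance: dC/dt = D_eff * d2C/dz2 - R, hence
            R = D_eff * d2C/dz2 - dC/dt. Ignoring diffusion would instead
            give R = -dC/dt. C: array (times, depths)."""
            dCdt = np.gradient(C, dt, axis=0)
            d2Cdz2 = np.gradient(np.gradient(C, dz, axis=1), dz, axis=1)
            return D_eff * d2Cdz2 - dCdt

        C = np.random.rand(10, 20)     # 10 hourly snapshots x 20 depths
        R = consumption_rate(C, dz=0.1, dt=3600.0, D_eff=1.0e-6)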

  8. Visual and Auditory Synchronization Deficits Among Dyslexic Readers as Compared to Non-impaired Readers: A Cross-Correlation Algorithm Analysis

    Directory of Open Access Journals (Sweden)

    Itamar Sela

    2014-06-01

    Full Text Available Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed-of-processing gap (asynchrony) between these two modalities, which has been suggested to be related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired university adult readers were given stimuli in a reaction time procedure where participants were asked to identify whether the stimulus type was only visual, only auditory, or crossmodally integrated. Accuracy, reaction time, and event-related potential (ERP) measures were obtained for each of the three conditions. An algorithm to measure the contribution of the temporal speed of processing of each modality to the crossmodal integration in each group of participants was developed. Results obtained using this model for the analysis of the current study data indicated that in the crossmodal integration condition the presence of the auditory modality at the pre-response time frame (between 170-240 ms after stimulus presentation) increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal speed of processing of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia.
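
    This record does not reproduce the algorithm itself; a minimal sketch of the generic cross-correlation step, estimating the lag at which two modality time series align best (the signals below are synthetic):

        import numpy as np

        def peak_lag(x, y, fs):
            """Lag (ms) at which the cross-correlation of two equal-length
            signals peaks; positive means y lags x."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            xc = np.correlate(y, x, mode="full")
            lag = np.argmax(xc) - (len(x) - 1)
            return 1000.0 * lag / fs

        fs = 1000.0                         # 1 kHz sampling
        t = np.arange(0, 1, 1 / fs)
        visual = np.sin(2 * np.pi * 4 * t)
        auditory = np.roll(visual, 60)      # auditory trace delayed 60 ms
        print(peak_lag(visual, auditory, fs))    # ~60.0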

  9. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, was present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white-matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  10. 2 years of INTEGRAL monitoring of GRS 1915+105. II. X-ray spectro-temporal analysis

    DEFF Research Database (Denmark)

    Rodriguez, J.; Shaw, S.E.; Hannikainen, D.C.;

    2008-01-01

    This is the second paper presenting the results of 2 yr of monitoring of GRS 1915+105 with INTEGRAL, RXTE, and the Ryle Telescope. We present the X-ray spectral and temporal analysis of four observations showing strong radio to X-ray correlations. During one observation GRS 1915+105 was in a steady...

  11. On the relations among temporal integration for loudness, loudness discrimination, and the form of the loudness function. (A)

    DEFF Research Database (Denmark)

    Poulsen, Torben; Buus, Søren; Florentine, M

    1996-01-01

    of two equal-duration tones, they do not appear to depend on duration. The level dependence of temporal integration and the loudness jnds are consistent with a loudness function [log(loudness) versus SPL] that is flatter at moderate levels than at low and high levels. [Work supported by NIH-NIDCD R01DC...

  12. Preference for Audiovisual Speech Congruency in Superior Temporal Cortex.

    Science.gov (United States)

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

    Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to look at the neural responses of human participants during the McGurk illusion--in which auditory /aba/ and visual /aga/ inputs are fused to perceived /ada/--in a large homogenous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with incongruent audiovisual stimulation leading to the McGurk illusion while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that superior temporal cortex prefers when auditory and visual input support the same representation.

  13. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
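
    The connectivity measure described here reduces, for each voxel, to the mean correlation of its resting-state time series with every other voxel in the face network; a compact sketch with simulated data:

        import numpy as np

        def within_network_connectivity(ts):
            """Voxel-wise within-network connectivity (WNC): for each voxel,
            the mean correlation of its time series with all other network
            voxels. ts: array (timepoints, voxels)."""
            r = np.corrcoef(ts.T)             # voxel-by-voxel correlations
            np.fill_diagonal(r, np.nan)       # exclude self-correlation
            return np.nanmean(r, axis=1)      # one WNC value per voxel

        ts = np.random.randn(200, 500)        # e.g., 200 TRs, 500 FN voxels
        wnc = within_network_connectivity(ts)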

  14. Time-filtered leapfrog integration of Maxwell equations using unstaggered temporal grids

    Science.gov (United States)

    Mahalov, A.; Moustaoui, M.

    2016-11-01

    A finite-difference time-domain method for integration of Maxwell equations is presented. The computational algorithm is based on the leapfrog time stepping scheme with unstaggered temporal grids. It uses a fourth-order implicit time filter that reduces computational modes and fourth-order finite difference approximations for spatial derivatives. The method can be applied within both staggered and collocated spatial grids. It has the advantage of allowing explicit treatment of terms involving electric current density and application of selective numerical smoothing which can be used to smooth out errors generated by finite differencing. In addition, the method does not require iteration of the electric constitutive relation in nonlinear electromagnetic propagation problems. The numerical method is shown to be effective and stable when employed within Perfectly Matched Layers (PML). Stability analysis demonstrates that the proposed method is effective in stabilizing and controlling numerical instabilities of computational modes arising in wave propagation problems with physical damping and artificial smoothing terms while maintaining higher accuracy for the physical modes. Comparison of simulation results obtained from the proposed method and those computed by the classical time filtered leapfrog, where Maxwell equations are integrated for a lossy medium, within PML regions and for Kerr-nonlinear media show that the proposed method is robust and accurate. The performance of the computational algorithm is also verified by analyzing parametric four wave mixing in an optical nonlinear Kerr medium. The algorithm is found to accurately predict frequencies and amplitudes of nonlinearly converted waves under realistic conditions proposed in the literature.
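
    The filtering idea can be illustrated on a scalar oscillation equation, a stand-in for the wave-like terms in Maxwell's equations. The sketch below uses the classical explicit Robert-Asselin filter rather than the paper's fourth-order implicit filter, so it only demonstrates the principle of damping the spurious leapfrog computational mode.

        import numpy as np

        # leapfrog integration of du/dt = i*omega*u with a time filter
        omega, dt, nsteps, eps = 1.0, 0.1, 200, 0.05
        u_prev = 1.0 + 0j                            # u at step n-1
        u_curr = u_prev * np.exp(1j * omega * dt)    # exact value at step n
        for _ in range(nsteps):
            u_next = u_prev + 2.0 * dt * 1j * omega * u_curr   # leapfrog step
            # filter the middle time level to damp the computational mode
            u_prev = u_curr + eps * (u_next - 2.0 * u_curr + u_prev)
            u_curr = u_next
        print(abs(u_curr))    # amplitude stays near 1 (slight filter damping)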

  15. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    Science.gov (United States)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).

  16. Two visual targets for the price of one? Pupil dilation shows reduced mental effort through temporal integration.

    Science.gov (United States)

    Wolff, Michael J; Scholz, Sabine; Akyürek, Elkan G; van Rijn, Hedderik

    2015-02-01

    In dynamic sensory environments, successive stimuli may be combined perceptually and represented as a single, comprehensive event by means of temporal integration. Such perceptual segmentation across time is intuitively plausible. However, the possible costs and benefits of temporal integration in perception remain underspecified. In the present study pupil dilation was analyzed as a measure of mental effort. Observers viewed either one or two successive targets amidst distractors in rapid serial visual presentation, which they were asked to identify. Pupil dilation was examined dependent on participants' report: dilation associated with the report of a single target, of two targets, and of an integrated percept consisting of the features of both targets. There was a clear distinction between dilation observed for single-target reports and integrations on the one side, and two-target reports on the other. Regardless of report order, two-target reports produced increased pupil dilation, reflecting increased mental effort. The results thus suggested that temporal integration reduces mental effort and may thereby facilitate perceptual processing.

  17. Preparation and Culture of Chicken Auditory Brainstem Slices

    OpenAIRE

    Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W; Barria, Andres

    2011-01-01

    The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development 1-4, as well as mechanisms for temporal coding in the central nervous system 5-7.

  18. Auditory Backward Masking Deficits in Children with Reading Disabilities

    Science.gov (United States)

    Montgomery, Christine R.; Morris, Robin D.; Sevcik, Rose A.; Clarkson, Marsha G.

    2005-01-01

    Studies evaluating temporal auditory processing among individuals with reading and other language deficits have yielded inconsistent findings due to methodological problems (Studdert-Kennedy & Mody, 1995) and sample differences. In the current study, seven auditory masking thresholds were measured in fifty-two 7- to 10-year-old children (26…

  19. Improving Depiction of Temporal Bone Anatomy With Low-Radiation Dose CT by an Integrated Circuit Detector in Pediatric Patients

    Science.gov (United States)

    He, Jingzhen; Zu, Yuliang; Wang, Qing; Ma, Xiangxing

    2014-01-01

    Abstract The purpose of this study was to determine the performance of low-dose computed tomography (CT) scanning with an integrated circuit (IC) detector in defining fine structures of the temporal bone in children, by comparison with the conventional detector. The study was performed with the approval of our institutional review board and the patients' anonymity was maintained. A total of 86 children  0.05). The low-dose CT images acquired with the IC detector provide better depiction of fine osseous structures of the temporal bone than those acquired with the conventional DC detector. PMID:25526489

  20. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.

  1. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha

    2010-02-01

    Full Text Available The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) in subjects with left mesial temporal sclerosis (LMTS) in relation to a behavioral test, the Dichotic Digits Test (DDT), and an event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and P300 amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls; this difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. Regarding P300 latency and amplitude, as a group, LMTS subjects showed a trend toward longer P300 latency and lower P300 amplitude at all positions relative to controls, a difference that was statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of P300 between affected and unaffected hemispheres.

  2. Moving in time: Bayesian causal inference explains movement coordination to auditory beats.

    Science.gov (United States)

    Elliott, Mark T; Wing, Alan M; Welchman, Andrew E

    2014-07-07

    Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.
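
    The integrate-versus-segregate decision described above follows the general form of Bayesian causal inference for cue combination (after Körding et al., 2007); the sketch below is that generic model, not the authors' exact implementation, and all parameter values are illustrative. Given two noisy cue measurements, it returns the posterior probability of a common cause and a model-averaged estimate:

```python
import numpy as np

def causal_inference(x1, x2, s1, s2, sp=1.0, mup=0.0, p_common=0.5):
    """Bayesian causal inference for two noisy timing cues x1, x2.

    s1, s2 : cue noise SDs; sp, mup : Gaussian prior over the source;
    p_common : prior probability that both cues share one cause.
    Returns (posterior prob. of a common cause, model-averaged estimate).
    """
    v1, v2, vp = s1**2, s2**2, sp**2
    # Likelihood of the data under one shared cause (source integrated out).
    den1 = v1*v2 + v1*vp + v2*vp
    L1 = np.exp(-0.5*((x1-x2)**2*vp + (x1-mup)**2*v2 + (x2-mup)**2*v1)/den1) \
         / (2*np.pi*np.sqrt(den1))
    # Likelihood under two independent causes.
    L2 = np.exp(-0.5*(x1-mup)**2/(v1+vp)) / np.sqrt(2*np.pi*(v1+vp)) \
       * np.exp(-0.5*(x2-mup)**2/(v2+vp)) / np.sqrt(2*np.pi*(v2+vp))
    post_c = L1*p_common / (L1*p_common + L2*(1-p_common))
    # Reliability-weighted fusion vs. single-cue estimate, then model averaging.
    s_fused = (x1/v1 + x2/v2 + mup/vp) / (1/v1 + 1/v2 + 1/vp)
    s_alone = (x1/v1 + mup/vp) / (1/v1 + 1/vp)
    return post_c, post_c*s_fused + (1-post_c)*s_alone

# Nearly coincident beats -> integrate; conflicting beats -> segregate.
print(causal_inference(x1=0.01, x2=0.02, s1=0.02, s2=0.02))
print(causal_inference(x1=0.01, x2=0.30, s1=0.02, s2=0.02))
```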

  3. Bilateral collicular interaction: modulation of auditory signal processing in frequency domain.

    Science.gov (United States)

    Cheng, L; Mei, H-X; Tang, J; Fu, Z-Y; Jen, P H-S; Chen, Q-C

    2013-04-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from a variety of lower auditory nuclei, intrinsic projections within the IC, contralateral IC through the commissure of the IC and the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in the modulation of frequency-domain signal processing of mice using electrophysiological recording and focal electrical stimulation. Focal electrical stimulation of neurons in one IC produces widespread inhibition and focused facilitation of responses of neurons in the other IC. This bilateral collicular interaction decreases the response magnitude and lengthens the response latency of inhibited IC neurons but produces an opposite effect on the response of facilitated IC neurons. In the frequency domain, the focal electrical stimulation of one IC sharpens or expands the frequency tuning curves (FTCs) of neurons in the other IC to improve frequency sensitivity and the frequency response range. The focal electrical stimulation also produces a shift in the best frequency (BF) of modulated IC (ICMdu) neurons toward that of electrically stimulated IC (ICES) neurons. The degree of bilateral collicular interaction is dependent upon the difference in the BF between the ICES neurons and ICMdu neurons. These data suggest that bilateral collicular interaction is a part of dynamic acoustic signal processing that adjusts and improves signal processing as well as reorganizes collicular representation of signal parameters according to the acoustic experience.

  4. Representing Representation: Integration between the Temporal Lobe and the Posterior Cingulate Influences the Content and Form of Spontaneous Thought.

    Directory of Open Access Journals (Sweden)

    Jonathan Smallwood

    Full Text Available When not engaged in the moment, we often spontaneously represent people, places and events that are not present in the environment. Although this capacity has been linked to the default mode network (DMN), it remains unclear how interactions between the nodes of this network give rise to particular mental experiences during spontaneous thought. One hypothesis is that the core of the DMN integrates information from medial and lateral temporal lobe memory systems, which represent different aspects of knowledge. Individual differences in the connectivity between temporal lobe regions and the default mode network core would then predict differences in the content and form of people's spontaneous thoughts. This study tested this hypothesis by examining the relationship between seed-based functional connectivity and the contents of spontaneous thought recorded in a laboratory study several days later. Variations in connectivity from both medial and lateral temporal lobe regions were associated with different patterns of spontaneous thought, and these effects converged on an overlapping region in the posterior cingulate cortex. We propose that the posterior core of the DMN acts as a representational hub that integrates information represented in the medial and lateral temporal lobes, and that this process is important in determining the content and form of spontaneous thought.

  5. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Full Text Available Background: Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.

  6. The Global Food Price Crisis and China-World Rice Market Integration: A Spatial-Temporal Rational Expectations Equilibrium Model

    OpenAIRE

    Liu, Xianglin; Romero-Aguilar, Randall S.; Chen, Shu-Ling; Miranda, Mario J.

    2013-01-01

    In this paper, we examine how China, the world’s largest rice producer and consumer, would affect the international rice market if it liberalized its trade in rice and became more fully integrated into the global rice market. The impacts of trade liberalization are estimated using a spatial-temporal rational expectations model of the world rice market characterized by four interdependent markets with stochastic production patterns, constant-elasticity demands, expected-profit maximizing priva...

  7. Regional heavy metal pollution in crops by integrating physiological function variability with spatio-temporal stability using multi-temporal thermal remote sensing

    Science.gov (United States)

    Liu, Meiling; Liu, Xiangnan; Zhang, Biyao; Ding, Chao

    2016-09-01

    Heavy metal stress in crops is characterized by stability in space and time, which differs from other stressors that are typically more transient (e.g., drought, pests/diseases, and mismanagement). The objective of this study is to assess regional heavy metal stress in rice by integrating physiological function variability with spatio-temporal stability based on multi-temporal thermal infrared (TIR) remote sensing images. The field in which the experiment was conducted is located in Zhuzhou City, Hunan Province, China. HJ-1B images and in-situ measured data were collected from rice growing in heavy metal contaminated soils. A stress index (SI) was devised as an indicator for the degree of heavy metal stress of the rice in different growth stages, and a time-spectrum feature space (TSFS) model was used to determine rice heavy metal stress levels. The results indicate that (i) SI is a good indicator of rice damage caused by heavy metal stress. Minimum values of SI occur in rice subject to high pollution, followed by larger SI with medium pollution and maximum SI for low pollution, for the same growth stage. (ii) SI shows some variation for different growth stages of rice, and the minimum SI occurs at the flowering stage. (iii) The TSFS model is successful at identifying rice heavy metal stress, and stress levels in rice stabilized regardless of the model being applied in the two different years. This study suggests that regional heavy metal stress in crops can be accurately detected using TIR technology, if a sensitive indicator of crop physiological function impairment is used and an effective model is selected. A combination of spectrum and spatio-temporal information appears to be a very promising method for monitoring crops with various stressors.

  8. Impairments of auditory scene analysis in Alzheimer's disease.

    Science.gov (United States)

    Goll, Johanna C; Kim, Lois G; Ridgway, Gerard R; Hailstone, Julia C; Lehmann, Manja; Buckley, Aisling H; Crutch, Sebastian J; Warren, Jason D

    2012-01-01

    Parsing of sound sources in the auditory environment or 'auditory scene analysis' is a computationally demanding cognitive operation that is likely to be vulnerable to the neurodegenerative process in Alzheimer's disease. However, little information is available concerning auditory scene analysis in Alzheimer's disease. Here we undertook a detailed neuropsychological and neuroanatomical characterization of auditory scene analysis in a cohort of 21 patients with clinically typical Alzheimer's disease versus age-matched healthy control subjects. We designed a novel auditory dual stream paradigm based on synthetic sound sequences to assess two key generic operations in auditory scene analysis (object segregation and grouping) in relation to simpler auditory perceptual, task and general neuropsychological factors. In order to assess neuroanatomical associations of performance on auditory scene analysis tasks, structural brain magnetic resonance imaging data from the patient cohort were analysed using voxel-based morphometry. Compared with healthy controls, patients with Alzheimer's disease had impairments of auditory scene analysis, and segregation and grouping operations were comparably affected. Auditory scene analysis impairments in Alzheimer's disease were not wholly attributable to simple auditory perceptual or task factors; however, the between-group difference relative to healthy controls was attenuated after accounting for non-verbal (visuospatial) working memory capacity. These findings demonstrate that clinically typical Alzheimer's disease is associated with a generic deficit of auditory scene analysis. Neuroanatomical associations of auditory scene analysis performance were identified in posterior cortical areas including the posterior superior temporal lobes and posterior cingulate. This work suggests a basis for understanding a class of clinical symptoms in Alzheimer's disease and for delineating the cognitive mechanisms that mediate auditory scene analysis.

  9. Integrating cross-scale analysis in the spatial and temporal domains for classification of behavioral movement

    Directory of Open Access Journals (Sweden)

    Ali Soleymani

    2014-06-01

    Full Text Available Since various behavioral movement patterns are likely to be valid within different, unique ranges of spatial and temporal scales (e.g., instantaneous, diurnal, or seasonal, with corresponding spatial extents), a cross-scale approach is needed for accurate classification of behaviors expressed in movement. Here, we introduce a methodology for the characterization and classification of behavioral movement data that relies on computing and analyzing movement features jointly in both the spatial and temporal domains. The proposed methodology consists of three stages. In the first stage, focusing on the spatial domain, the underlying movement space is partitioned into several zonings that correspond to different spatial scales, and features related to movement are computed for each partitioning level. In the second stage, concentrating on the temporal domain, several movement parameters are computed from trajectories across a series of temporal windows of increasing sizes, yielding another set of input features for the classification. For both the spatial and the temporal domains, the "reliable scale" is determined by an automated procedure. This is the scale at which the best classification accuracy is achieved, using only spatial or temporal input features, respectively. The third stage takes the measures from the spatial and temporal domains of movement, computed at the corresponding reliable scales, as input features for behavioral classification. With a feature selection procedure, the most relevant features contributing to known behavioral states are extracted and used to learn a classification model. The potential of the proposed approach is demonstrated on a dataset of adult zebrafish (Danio rerio) swimming movements in testing tanks, following exposure to different drug treatments. Our results show that behavioral classification accuracy greatly increases when firstly cross-scale analysis is used to determine the best analysis scale, and
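
    A minimal sketch of the temporal-domain stage: window-based movement features are computed at several candidate temporal scales, and the "reliable scale" is taken to be the one that maximizes cross-validated classification accuracy. The feature set, the toy trajectories, and the use of scikit-learn are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(xy, win):
    """Mean/SD of speed and mean absolute turning angle per window of `win` fixes."""
    v = np.diff(xy, axis=0)
    speed = np.linalg.norm(v, axis=1)
    ang = np.arctan2(v[:, 1], v[:, 0])
    turn = np.abs(np.angle(np.exp(1j * np.diff(ang))))   # wrapped turning angle
    feats = []
    for i in range(0, len(turn) - win, win):
        feats.append([speed[i:i+win].mean(), speed[i:i+win].std(),
                      turn[i:i+win].mean()])
    return np.array(feats)

# Toy trajectories: two "behaviors" differing in step size, standing in for
# the drug-treatment classes in the zebrafish data.
rng = np.random.default_rng(0)
slow = np.cumsum(rng.normal(0, 0.5, (2000, 2)), axis=0)
fast = np.cumsum(rng.normal(0, 1.5, (2000, 2)), axis=0)

best = None
for win in (10, 25, 50, 100, 200):           # candidate temporal scales
    Xa, Xb = window_features(slow, win), window_features(fast, win)
    X = np.vstack([Xa, Xb])
    y = np.r_[np.zeros(len(Xa)), np.ones(len(Xb))]
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    best = max(best or (acc, win), (acc, win))
    print(f"window={win:4d}  accuracy={acc:.2f}")
print("reliable temporal scale:", best[1])
```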

  10. MULTIMODAL INFORMATION FUSION AND TEMPORAL INTEGRATION FOR VIOLENCE DETECTION IN MOVIES

    OpenAIRE

    Penet, Cédric; Demarty, Claire-Hélène; Gravier, Guillaume; Gros, Patrick

    2012-01-01

    This paper presents a violent-shot detection system that studies several methods for introducing temporal and multimodal information into the framework. It also investigates different kinds of Bayesian network structure-learning algorithms for modelling these problems. The system is trained and tested using the MediaEval 2011 Affect Task corpus, which comprises 15 Hollywood movies. It is experimentally shown that both multimodality and temporality add interesting inf...

  11. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  12. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal eAmitay

    2012-11-01

    Full Text Available Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers, upon which computational modeling of phase-locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioural FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.

  13. Design of all-optical high-order temporal integrators based on multiple-phase-shifted Bragg gratings.

    Science.gov (United States)

    Asghari, Mohammad H; Azaña, José

    2008-07-21

    In exact analogy with their electronic counterparts, photonic temporal integrators are fundamental building blocks for constructing all-optical circuits for ultrafast information processing and computing. In this work, we introduce a simple and general approach for realizing all-optical arbitrary-order temporal integrators. We demonstrate that the Nth cumulative time integral of the complex field envelope of an input optical waveform can be obtained by simply propagating this waveform through a single uniform fiber/waveguide Bragg grating (BG) incorporating N π-phase shifts along its axial profile. We derive here the design specifications of photonic integrators based on multiple-phase-shifted BGs. We show that the phase shifts in the BG structure can be arbitrarily located along the grating length provided that each uniform grating section (sections separated by the phase shifts) is sufficiently long so that its associated peak reflectivity reaches nearly 100%. The resulting designs are demonstrated by numerical simulations assuming all-fiber implementations. Our simulations show that the proposed approach can provide optical operation bandwidths in the tens-of-GHz regime using readily feasible photo-induced fiber BG structures.
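
    Numerically, the target operation is easy to state: an Nth-order integrator outputs the Nth cumulative time integral of the input envelope, corresponding to a spectral response proportional to 1/[j(ω − ω0)]^N near the grating resonance ω0 over the device bandwidth. The toy check below verifies the time-domain definition only (it does not simulate a grating): the input is the Nth derivative of a Gaussian, so its Nth cumulative integral is known in closed form.

```python
import numpy as np

N = 2                                    # integrator order (= number of pi shifts)
t = np.linspace(-20, 20, 4001)
dt = t[1] - t[0]

envelope = np.exp(-t**2 / 2)             # "ideal output": a Gaussian envelope
x = envelope.copy()
for _ in range(N):                       # input = Nth derivative of the Gaussian,
    x = np.gradient(x, dt)               # so its Nth cumulative integral is known

y = x.copy()
for _ in range(N):                       # Nth cumulative time integral
    y = np.cumsum(y) * dt

# Reconstruction error is small relative to the unit peak of the envelope.
print("max reconstruction error:", np.max(np.abs(y - envelope)))
```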

  14. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency.

  15. Proprioceptive cues modulate further processing of spatially congruent auditory information. a high-density EEG study.

    Science.gov (United States)

    Simon-Dack, S L; Teder-Sälejärvi, W A

    2008-07-18

    Multisensory integration and interaction occur when bimodal stimuli are presented as either spatially congruent or incongruent, but temporally coincident. We investigated whether proprioceptive cues interact with auditory attention to one of two sound sources in free field. The participants' task was to attend to either the left or the right speaker and to respond to occasional increased-bandwidth targets via a footswitch. We recorded high-density EEG in three experimental conditions: the participants either held the speakers in their hands (Hold), reached out close to them (Reach), or had their hands in their lap (Lap). In the last two conditions, the auditory event-related potentials (ERPs) revealed a prominent negativity around 200 ms post-stimulus (N2 wave) over fronto-central areas, which is a reliable index of further processing of spatial stimulus features in free field. The N2 wave was markedly attenuated in the Hold condition, which suggests that proprioceptive cues solidify the spatial information computed by the auditory system, thereby alleviating the need for further processing of spatial coordinates based solely on auditory information.

  16. Neuromodulatory Effects of Auditory Training and Hearing Aid Use on Audiovisual Speech Perception in Elderly Individuals

    Science.gov (United States)

    Yu, Luodi; Rao, Aparna; Zhang, Yang; Burton, Philip C.; Rishiq, Dania; Abrams, Harvey

    2017-01-01

    Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a training program named ReadMyQuips™ (RMQ), targeting speechreading, during the second half of the study period (4 weeks). Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROI), including auditory cortex and visual cortex for uni-sensory processing, and superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes involving ROIs of the auditory cortex, STS and functional connectivity between uni-sensory ROIs and STS from pretest to posttest in both cases. These data provide initial evidence for malleable, experience-driven cortical functionality for AV speech perception in elderly hearing-impaired people and call for further studies with a much larger sample and systematic controls to fill in the knowledge gap in understanding the brain plasticity associated with auditory rehabilitation in the aging population. PMID:28270763

  17. Asymmetric excitatory synaptic dynamics underlie interaural time difference processing in the auditory system.

    Directory of Open Access Journals (Sweden)

    Pablo E Jercog

    Full Text Available Low-frequency sound localization depends on the neural computation of interaural time differences (ITD) and relies on neurons in the auditory brain stem that integrate synaptic inputs delivered by the ipsi- and contralateral auditory pathways that start at the two ears. The first auditory neurons that respond selectively to ITD are found in the medial superior olivary nucleus (MSO). We identified a new mechanism for ITD coding using a brain slice preparation that preserves the binaural inputs to the MSO. There was an internal latency difference for the two excitatory pathways that would, if left uncompensated, position the ITD response function too far outside the physiological range to be useful for estimating ITD. We demonstrate, and support using a biophysically based computational model, that a bilateral asymmetry in excitatory post-synaptic potential (EPSP) slopes provides a robust compensatory delay mechanism, due to differential activation of low-threshold potassium conductance on these inputs, and permits MSO neurons to encode physiological ITDs. We suggest, more generally, that the dependence of spike probability on the rate of depolarization, as in these auditory neurons, provides a mechanism for temporal order discrimination between EPSPs.

  18. How and when auditory action effects impair motor performance.

    Science.gov (United States)

    D'Ausilio, Alessandro; Brunetti, Riccardo; Delogu, Franco; Santonico, Cristina; Belardinelli, Marta Olivetti

    2010-03-01

    Music performance is characterized by complex cross-modal interactions, offering a remarkable window into training-induced long-term plasticity and multimodal integration processes. Previous research with pianists has shown that playing a musical score is affected by the concurrent presentation of musical tones. We investigated the nature of this audio-motor coupling by evaluating how congruent and incongruent cross-modal auditory cues affect motor performance at different time intervals. We found facilitation if a congruent sound preceded motor planning with a large stimulus onset asynchrony (SOA; -300 and -200 ms), whereas we observed interference when an incongruent sound was presented with shorter SOAs (-200, -100 and 0 ms). Interference and facilitation, instead of developing through time as opposite effects of the same mechanism, showed dissociable time courses, suggesting that they derive from distinct processes. It seems that the motor preparation induced by the auditory cue has different consequences on motor performance according to its congruency with the future motor state the system is planning and the degree of asynchrony between the motor act and the sound presentation. The temporal dissociation we found contributes to the understanding of how perception meets action in the context of audio-motor integration.

  19. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes

    Science.gov (United States)

    Renvall, Hanna; Staeren, Noël; Barz, Claudia S.; Ley, Anke; Formisano, Elia

    2016-01-01

    This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in

  20. Single-molecule diffusion and conformational dynamics by spatial integration of temporal fluctuations

    KAUST Repository

    Bayoumi, Maged Fouad

    2014-10-06

    Single-molecule localization and tracking have been used to translate the spatiotemporal information of individual molecules into maps of their diffusion behaviours. However, accurate analysis of diffusion behaviours, and the inclusion of other parameters such as the conformation and size of molecules, remain limitations of the method. Here, we report a method that addresses the limitations of existing single-molecule localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of the determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.
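
    The core idea, tracking the cumulative area that a molecule's localizations have covered rather than fitting per-step displacements, can be illustrated with a toy 2D Brownian walk rasterized onto a pixel grid (the pixel size standing in for the localization footprint). Everything here is an invented toy, not the authors' analysis:

```python
import numpy as np

def cumulative_area(D, n_steps=5000, dt=0.01, pixel=0.05, seed=1):
    """Cumulative count of distinct pixels visited by a 2D Brownian walk.

    D : diffusion coefficient; per-axis step SD is sqrt(2*D*dt).
    """
    rng = np.random.default_rng(seed)
    pos = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), (n_steps, 2)), axis=0)
    visited, counts = set(), []
    for x, y in pos:
        visited.add((int(x // pixel), int(y // pixel)))   # rasterize onto grid
        counts.append(len(visited))
    return np.array(counts)

slow = cumulative_area(D=0.1)
fast = cumulative_area(D=1.0)
# The growth rate of the occupied area, rather than individual displacements,
# carries the diffusion information.
print("pixels after 5000 steps: slow =", slow[-1], " fast =", fast[-1])
```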

  1. A neurophysiological deficit in early visual processing in schizophrenia patients with auditory hallucinations.

    Science.gov (United States)

    Kayser, Jürgen; Tenke, Craig E; Kroppmann, Christopher J; Alschuler, Daniel M; Fekri, Shiva; Gil, Roberto; Jarskog, L Fredrik; Harkavy-Friedman, Jill M; Bruder, Gerard E

    2012-09-01

    Existing 67-channel event-related potentials, obtained during recognition and working memory paradigms with words or faces, were used to examine early visual processing in schizophrenia patients prone to auditory hallucinations (AH, n = 26) or not (NH, n = 49) and healthy controls (HC, n = 46). Current source density (CSD) transforms revealed distinct, strongly left- (words) or right-lateralized (faces; N170) inferior-temporal N1 sinks (150 ms) in each group. N1 was quantified by temporal PCA of peak-adjusted CSDs. For words and faces in both paradigms, N1 was substantially reduced in AH compared with NH and HC, who did not differ from each other. The difference in N1 between AH and NH was not due to overall symptom severity or performance accuracy, with both groups showing comparable memory deficits. Our findings extend prior reports of reduced auditory N1 in AH, suggesting a broader early perceptual integration deficit that is not limited to the auditory modality.

  2. Rapid image-segmentation and perceptual transparency share a process which utilises X-junctions generated by temporal integration in the visual system.

    Science.gov (United States)

    Mitsudo, Hiroyuki

    2004-01-01

    Perceptual transparency requires local same-polarity X-junctions, which can also be generated by temporal integration under natural dynamic conditions. In this study, segmentation performance and target appearance were measured for a uniform gray target embedded in a random-dot frame presented with a temporally adjacent mask. Although static cues for both segmentation and transparency were unavailable, transparency was observed only when collinear same-polarity edges reduced backward masking, in both the fovea and the perifovea. These results suggest that the visual system has a common underlying mechanism for rapid segmentation and transparency, which utilises same-polarity X-junctions generated by temporal integration.

  3. Visual discrimination of delayed self-generated movement reveals the temporal limit of proprioceptive-visual intermodal integration.

    Science.gov (United States)

    Jaime, Mark; O'Driscoll, Kelly; Moore, Chris

    2016-07-01

    This study examined the intermodal integration of visual-proprioceptive feedback via a novel visual discrimination task of delayed self-generated movement. Participants performed a goal-oriented task in which visual feedback was available only via delayed videos displayed on two monitors, each with a different delay duration. During task performance, the delay duration was varied for one of the videos in the pair relative to a standard delay, which was held constant. Participants were required to identify and use the video with the lesser delay to perform the task. Visual discrimination of the lesser-delayed video was examined under four conditions in which the standard delay was increased for each condition. A temporal limit for proprioceptive-visual intermodal integration of 3-5 s was revealed by subjects' inability to reliably discriminate video pairs.

  4. The mitochondrial connection in auditory neuropathy.

    Science.gov (United States)

    Cacace, Anthony T; Pinheiro, Joaquim M B

    2011-01-01

    'Auditory neuropathy' (AN), the term used to codify a primary degeneration of the auditory nerve, can be linked directly or indirectly to mitochondrial dysfunction. These observations are based on the expression of AN in known mitochondrial-based neurological diseases (Friedreich's ataxia, Mohr-Tranebjærg syndrome), in conditions where defects in axonal transport, protein trafficking, and fusion processes perturb and/or disrupt mitochondrial dynamics (Charcot-Marie-Tooth disease, autosomal dominant optic atrophy), in a common neonatal condition known to be toxic to mitochondria (hyperbilirubinemia), and where respiratory chain deficiencies produce reductions in oxidative phosphorylation that adversely affect peripheral auditory mechanisms. This body of evidence is solidified by data derived from temporal bone and genetic studies, biochemical, molecular biologic, behavioral, electroacoustic, and electrophysiological investigations.

  5. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of the auditory feedback of the first formant frequency during production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
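
    A schematic of how such a compensatory response might be quantified from an F1 trace, under assumptions of our own (baseline-referenced deviation, a fixed SD criterion, fully synthetic data); the study's actual measurement pipeline may differ:

```python
import numpy as np

def compensation_metrics(f1, t, pert_onset, crit_sd=4.0):
    """Schematic quantification of a compensatory F1 response.

    f1 : F1 trajectory (Hz); t : time (s); pert_onset : perturbation time (s).
    Magnitude = mean post-onset deviation from the pre-onset baseline;
    latency  = first time the deviation exceeds crit_sd baseline SDs.
    """
    base = f1[t < pert_onset]
    dev = f1 - base.mean()
    post = t >= pert_onset
    crossed = post & (np.abs(dev) > crit_sd * base.std())
    latency = t[crossed][0] - pert_onset if crossed.any() else np.nan
    return dev[post].mean(), latency

# Synthetic trace: flat baseline, then a downward ramp starting ~120 ms after
# perturbation onset (capped at -40 Hz), mimicking a delayed compensation.
t = np.arange(0.0, 1.0, 0.005)
f1 = 550 + np.random.default_rng(2).normal(0, 1.0, t.size)
f1[t >= 0.62] -= np.clip((t[t >= 0.62] - 0.62) * 200, 0, 40)
mag, lat = compensation_metrics(f1, t, pert_onset=0.5)
print(f"magnitude = {mag:.1f} Hz, latency = {lat*1000:.0f} ms")
```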

  6. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    Science.gov (United States)

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  7. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex; planum temporale (PT); and the superior temporal sulcus (STS), and about the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS, and an increase of the HG-to-PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  8. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response-recovery paradigm. We stimulated with a series of single-click and click-pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors, and first- and second-order interneurons. The response to the second click was expressed relative to the single-click response. This allowed the uncovering of the basic temporal resolution in these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects, most prominently a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds before. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike-timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response-recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by spike duration and refractory period.
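
    The response-recovery measure itself reduces to expressing the second-click response relative to the single-click response across inter-click intervals. A sketch with invented spike counts (not data from the study):

```python
import numpy as np

# Spike counts for a single click and for the second click of a pair, at
# several inter-click intervals (ms). Numbers are invented for illustration.
intervals = np.array([2, 5, 10, 20, 50, 100])            # ms
single = 12.0                                             # spikes to a lone click
second = np.array([1.0, 4.0, 7.5, 10.0, 11.5, 12.0])      # spikes to click 2

recovery = second / single            # 1.0 = full recovery of the response
for ici, r in zip(intervals, recovery):
    print(f"ICI {ici:4d} ms: recovery = {r:.2f}")

# Time to 50% recovery, linearly interpolated between intervals.
t50 = np.interp(0.5, recovery, intervals)
print(f"50% recovery at ~{t50:.1f} ms")
```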

  9. Expectation and attention in hierarchical auditory prediction.

    Science.gov (United States)

    Chennu, Srivas; Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M; Bekinschtein, Tristan A

    2013-07-03

    Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished it. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support to promising recent accounts of attention and expectation in predictive coding.

  10. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. Given that similarly abnormal responses are present in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  11. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Full Text Available Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. Given that similarly abnormal responses are present in Fmr1 KO mice suggests that cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, circuit level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  12. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned>pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.

  13. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute-episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute-episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general-linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute-state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  14. Noise Trauma Induced Plastic Changes in Brain Regions outside the Classical Auditory Pathway

    Science.gov (United States)

    Chen, Guang-Di; Sheppard, Adam; Salvi, Richard

    2017-01-01

    The effects of intense noise exposure on the classical auditory pathway have been extensively investigated; however, little is known about the effects of noise-induced hearing loss on non-classical auditory areas in the brain such as the lateral amygdala (LA) and striatum (Str). To address this issue, we compared the noise-induced changes in spontaneous and tone-evoked responses from multiunit clusters (MUC) in the LA and Str with those seen in the auditory cortex (AC). High-frequency octave-band noise (10–20 kHz) and narrow-band noise (16–20 kHz) induced permanent threshold shifts (PTS) at high frequencies within and above the noise band, but not at low frequencies. While the noise trauma significantly elevated the spontaneous discharge rate (SR) in the AC, SRs in the LA and Str were only slightly increased across all frequencies. The high-frequency noise trauma affected tone-evoked firing rates in a frequency- and time-dependent manner, and the changes appeared to be related to the severity of the noise trauma. In the LA, tone-evoked firing rates were reduced at the high frequencies (trauma area), whereas firing rates were enhanced at the low frequencies or at the edge frequency, depending on the severity of hearing loss at the high frequencies. The firing-rate temporal profile changed from a broad plateau to one sharp, delayed peak. In the AC, tone-evoked firing rates were depressed at high frequencies and enhanced at the low frequencies, while the firing-rate temporal profiles became substantially broader. In contrast, firing rates in the Str were generally decreased and firing-rate temporal profiles became more phasic and less prolonged. The altered firing rates and patterns at low frequencies induced by high-frequency hearing loss could have perceptual consequences. The tone-evoked hyperactivity in low-frequency MUC could manifest as hyperacusis, whereas the discharge-pattern changes could affect temporal resolution and integration. PMID:26701290

  15. Integration of various data sources for transient groundwater modeling with spatio-temporally variable fluxes—Sardon study case, Spain

    Science.gov (United States)

    Lubczynski, Maciek W.; Gurwin, Jacek

    2005-05-01

    Spatio-temporal variability of recharge (R) and groundwater evapotranspiration (ETg) fluxes in the granite Sardon catchment in Spain (~80 km2) has been assessed based on the integration of various data sources and methods within a numerical MODFLOW groundwater model. The data sources and methods included: remote sensing solution of surface energy balance using satellite data, sap flow measurements, chloride mass balance, automated monitoring of climate, depth to groundwater table and river discharges, 1D reservoir modeling, GIS modeling, field cartography and aerial photo interpretation, slug and pumping tests, resistivity, electromagnetic and magnetic resonance soundings. The presented case study provides not only a detailed evaluation of the complexity of spatio-temporally variable fluxes, but also a complete and generic methodology of modern data acquisition and data integration in transient groundwater modeling for spatio-temporal groundwater balancing. The calibrated numerical model showed spatially variable patterns of R and ETg fluxes despite a uniform rainfall pattern. The seasonal variability of fluxes indicated: (1) R in the range of 0.3-0.5 mm/d within ~8 months of the wet season, with exceptional peaks as high as 0.9 mm/d in January and February and no recharge in July and August; (2) a year-round stable lateral groundwater outflow (Qg) in the range of 0.08-0.24 mm/d; (3) ETg = 0.64, 0.80, 0.55 mm/d in the dry seasons of 1997, 1998, 1999, respectively, and <0.05 mm/d in wet seasons; (4) temporally variable aquifer storage, which gains water in wet seasons shortly after rain showers and loses water in dry seasons mainly due to groundwater evapotranspiration. The dry season sap flow measurements of tree transpiration performed in the homogenous stands of Quercus ilex and Quercus pyrenaica indicated flux rates of 0.40 and 0.15 mm/d, respectively. The dry season tree transpiration for the entire catchment was ~0.16 mm/d. The availability of dry season
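
    The flux balancing described above can be illustrated with a minimal lumped storage model; the sketch below uses flux magnitudes in the ranges quoted in the abstract purely as illustrative constants (MODFLOW performs this balance spatially, cell by cell, with calibrated parameters):

```python
# Minimal sketch of a lumped transient groundwater balance. Season lengths
# and flux values are illustrative stand-ins taken loosely from the abstract.
import numpy as np

days = np.arange(365)
wet = days < 240                        # ~8-month wet season (illustrative)

R = np.where(wet, 0.4, 0.0)             # recharge, mm/day
ETg = np.where(wet, 0.05, 0.65)         # groundwater evapotranspiration, mm/day
Qg = np.full_like(R, 0.16)              # lateral groundwater outflow, mm/day

# Storage change: gains shortly after wet-season recharge, losses in the
# dry season, dominated by groundwater evapotranspiration.
dS = R - ETg - Qg
storage = np.cumsum(dS)                 # mm of water relative to day 0

print(f"net annual storage change: {storage[-1]:.1f} mm")
```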

  16. From 3D to 4D: Integration of temporal information into CT angiography studies.

    Science.gov (United States)

    Haubenreisser, Holger; Bigdeli, Amir; Meyer, Mathias; Kremer, Thomas; Riester, Thomas; Kneser, Ulrich; Schoenberg, Stefan O; Henzler, Thomas

    2015-12-01

    CT angiography is the current clinical standard for imaging many vascular diseases. This is traditionally done with a single arterial contrast phase. However, advances in CT technology allow for a dynamic acquisition of the contrast bolus, thus adding temporal information to the examination. The aim of this article is to highlight the clinical possibilities of dynamic CTA using two examples. The accuracy of the detection and quantification of stenosis in patients with peripheral arterial occlusive disease, especially in stages III and IV, is significantly improved when performing dynamic CTA examinations. Post-interventional follow-up examinations after endovascular aneurysm repair (EVAR) benefit from dynamic information, allowing for a higher sensitivity and specificity, as well as a more accurate classification of potential endoleaks. The described radiation dose for these dynamic examinations is low, but it can be further optimized by using lower tube voltages. There are a multitude of applications for dynamic CTA that need to be further explored in future studies.

  17. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
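
    A minimal sketch of the penalty-scoring idea described above is given below; the tuple format, weights, and candidate schedules are hypothetical stand-ins, not the dissertation's actual heuristics:

```python
# Heuristic sketch: candidate schedules of audio messages are scored, and
# temporal overlaps between messages incur penalties weighted by priority.
from itertools import combinations

def penalty(schedule):
    """schedule: list of (start, duration, priority) tuples, times in seconds."""
    score = 0.0
    for (s1, d1, p1), (s2, d2, p2) in combinations(schedule, 2):
        overlap = max(0.0, min(s1 + d1, s2 + d2) - max(s1, s2))
        score += overlap * (p1 + p2)   # overlapping important messages cost more
    return score

candidates = [
    [(0.0, 2.0, 3), (0.5, 1.0, 1)],    # heavy overlap between two messages
    [(0.0, 2.0, 3), (2.0, 1.0, 1)],    # fully sequential presentation
]
best = min(candidates, key=penalty)    # lowest penalty = fewest conflicts
print("chosen schedule:", best, "penalty:", penalty(best))
```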

  18. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, would lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis, results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  19. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Full Text Available Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of the sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound, which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation. This computation may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measurements of response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
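
    The scale-invariance property at the heart of this record can be demonstrated with a toy signal: the sketch below (not the authors' chirp synthesis) generates 1/f noise and checks that its log-log spectral slope is essentially unchanged when the signal is time-rescaled, i.e., played at a different speed:

```python
# Toy illustration of spectral scale invariance: 1/f noise has a power-law
# spectrum, so rescaling the time axis leaves the log-log slope unchanged.
import numpy as np

rng = np.random.default_rng(1)
n = 2 ** 14
freqs = np.fft.rfftfreq(n, d=1.0)

# Shape complex white noise to a 1/f amplitude spectrum (skip the DC bin).
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] /= np.sqrt(freqs[1:])
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n)

def slope(x):
    """Log-log slope of the power spectrum, fitted across all nonzero bins."""
    f = np.fft.rfftfreq(x.size, d=1.0)[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    return np.polyfit(np.log(f), np.log(p), 1)[0]

print(f"original slope: {slope(signal):.2f}")            # roughly -1
print(f"time-rescaled slope: {slope(signal[::2]):.2f}")  # roughly -1 as well
```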

  20. Integrating temporal and spatial scales: Human structural network motifs across age and region-of-interest size

    CERN Document Server

    Echtermeyer, Christoph; Rotarska-Jagiela, Anna; Mohr, Harald; Uhlhaas, Peter J; Kaiser, Marcus

    2011-01-01

    Human brain networks can be characterized at different temporal or spatial scales given by the age of the subject or the spatial resolution of the neuroimaging method. Integration of data across scales can only be successful if the combined networks show a similar architecture. One way to compare networks is to look at spatial features, based on fibre length, and topological features of individual nodes where outlier nodes form single node motifs whose frequency yields a fingerprint of the network. Here, we observe how characteristic single node motifs change over age (12-23 years) and network size (414, 813, and 1615 nodes) for diffusion tensor imaging (DTI) structural connectivity in healthy human subjects. First, we find the number and diversity of motifs in a network to be strongly correlated. Second, comparing different scales, the number and diversity of motifs varied across the temporal (subject age) and spatial (network resolution) scale: certain motifs might only occur at one spatial scale or for a c...

  1. Spatial and temporal movements in Pyrenean bearded vultures (Gypaetus barbatus): Integrating movement ecology into conservation practice

    Science.gov (United States)

    Margalida, Antoni; Pérez-García, Juan Manuel; Afonso, Ivan; Moreno-Opo, Rubén

    2016-10-01

    Understanding the movement of threatened species is important if we are to optimize management and conservation actions. Here, we describe the age- and sex-specific spatial and temporal ranging patterns of 19 bearded vultures Gypaetus barbatus tracked with GPS technology. Our findings suggest that spatial asymmetries are a consequence of breeding status and age class. Territorial individuals exploited home ranges of about 50 km2, while non-territorial birds used areas of around 10 000 km2 (with no seasonal differences). Mean daily movements differed between territorial (23.8 km) and non-territorial birds (46.1 km), and differences were also found between sexes in non-territorial birds. Maximum distances travelled per day also differed between territorial (8.2 km) and non-territorial individuals (26.5 km). Territorial females moved greater distances (12 km) than males (6.6 km). Taking into account high-use core areas (K20), Supplementary Feeding Sites (SFS) do not seem to play an important role in the use of space by bearded vultures. For non-territorial and territorial individuals, 54% and 46% of their home ranges (K90), respectively, were outside protected areas. Our findings will help develop guidelines for establishing priority areas based on spatial use, and also optimize management and conservation actions for this threatened species.

  2. Microstructural Integrity of Early- vs. Late-Myelinating White Matter Tracts in Medial Temporal Lobe Epilepsy

    Science.gov (United States)

    Lee, Chu-Yu; Tabesh, Ali; Benitez, Andreana; Helpern, Joseph A; Jensen, Jens H; Bonilha, Leonardo

    2013-01-01

    Purpose: Patients with medial temporal lobe epilepsy (MTLE) exhibit structural brain damage involving gray (GM) and white matter (WM). The mechanisms underlying tissue loss in MTLE are unclear and may be associated with a combination of seizure excitotoxicity and WM vulnerability. The goal of this study was to investigate whether late-myelinating WM tracts are more vulnerable to injury in MTLE compared with early-myelinating tracts. Methods: Diffusional kurtosis imaging scans were obtained from 25 patients with MTLE and from 36 matched healthy controls. Diffusion measures from regions of interest (ROIs) for both late- and early-myelinating WM tracts were analyzed. Regional Z-scores were computed with respect to normal controls to compare WM in early-myelinating tracts versus late-myelinating tracts. Key Findings: We observed that late-myelinating tracts exhibited a larger decrease in mean, axial and radial kurtosis compared with early-myelinating tracts. We also observed that the change in radial kurtosis was more pronounced in late-myelinating tracts ipsilateral to the side of seizure onset. Significance: These results suggest a developmentally based preferential susceptibility of late-myelinating WM tracts to damage in MTLE. Brain injury in epilepsy may be due to the pathological effects of seizures in combination with regional WM vulnerability. PMID:24032670
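
    The regional Z-score computation described in the Methods reduces to standardizing each patient's ROI measure against the control distribution; a minimal sketch with simulated values:

```python
# Minimal sketch of regional Z-scoring: each patient's diffusion measure in
# an ROI is standardized against the control-group mean and SD for that ROI.
# ROI names and all values are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(2)
rois = ["early_tract_1", "early_tract_2", "late_tract_1", "late_tract_2"]

controls = {roi: rng.normal(1.0, 0.1, 36) for roi in rois}   # n = 36 controls
patients = {roi: rng.normal(0.9, 0.1, 25) for roi in rois}   # n = 25 patients

for roi in rois:
    mu, sd = controls[roi].mean(), controls[roi].std(ddof=1)
    z = (patients[roi] - mu) / sd        # per-patient regional Z-scores
    print(f"{roi}: mean Z = {z.mean():+.2f}")
```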

  3. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

    Full Text Available The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  4. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation in order to process online motor control. How to design informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  5. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements were studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes of the acoustic cues due to changes in head orientation in order to process online motor control. How to design informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  6. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  7. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Although several theoretical concepts have been proposed, the brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
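
    The delay-and-coincidence mechanism lends itself to a compact simulation; the sketch below uses illustrative parameters (a 30 ms preferred delay) to show how a coincidence unit responds selectively when the pulse period matches the internal delay:

```python
# Schematic simulation of delay-and-coincidence detection: a unit "fires"
# only when a direct pulse and a copy delayed by the preferred period arrive
# together, making it selective for that pulse rate. Parameters illustrative.
import numpy as np

def coincidence_response(pulse_period_ms, preferred_delay_ms=30.0,
                         n_pulses=20, tolerance_ms=2.0):
    pulses = np.arange(n_pulses) * pulse_period_ms      # direct pathway
    delayed = pulses + preferred_delay_ms               # delayed pathway
    # Fraction of direct pulses coinciding with a delayed copy of an earlier pulse.
    hits = sum(np.any(np.abs(p - delayed) < tolerance_ms) for p in pulses)
    return hits / n_pulses

for period in (20.0, 30.0, 40.0):
    print(f"pulse period {period:.0f} ms -> "
          f"response {coincidence_response(period):.2f}")
# Only the 30 ms period (matching the internal delay) drives the detector.
```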

  8. Time Course Effects of Temporal Integration in Different Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    陈本友; 黄希庭

    2012-01-01

    Temporal integration refers to the perceptual process by which successively presented stimuli are combined into a single coherent representation. In this study, the time course of temporal integration was examined for different facial expressions: each of three whole facial expression pictures was segmented into three parts, each containing a salient facial feature (eyes, nose, or mouth), and these parts were presented sequentially to participants at different inter-stimulus intervals. The results showed that (1) the time course of temporal integration (SOA) was influenced by the meaningfulness of the stimulus material, with a longer integration time course for facial expressions than for faces; and (2) there were differences between expression types, with a markedly longer integration time course for happy expressions than for angry or sad expressions.

  9. Quantifying the spatio-temporal dynamics of woody plant encroachment using an integrative remote sensing, GIS, and spatial modeling approach

    Science.gov (United States)

    Buenemann, Michaela

    Despite a longstanding universal concern about and intensive research into woody plant encroachment (WPE)---the replacement of grasslands by shrub- and woodlands---our accumulated understanding of the process has either not been translated into sustainable rangeland management strategies or has been translated with only limited success. In order to increase our scientific insights into WPE, move us one step closer toward the sustainable management of rangelands affected by or vulnerable to the process, and identify needs for a future global research agenda, this dissertation presents an unprecedented critical, qualitative and quantitative assessment of the existing literature on the topic and evaluates the utility of an integrative remote sensing, GIS, and spatial modeling approach for quantifying the spatio-temporal dynamics of WPE. Findings from this research suggest that gaps in our current understanding of WPE and difficulties in devising sustainable rangeland management strategies are in part due to the complex spatio-temporal web of interactions between geoecological and anthropogenic variables involved in the process as well as limitations of presently available data and techniques. However, an in-depth analysis of the published literature also reveals that aforementioned problems are caused by two further crucial factors: the absence of information acquisition and reporting standards and the relative lack of long-term, large-scale, multi-disciplinary research efforts. The methodological framework proposed in this dissertation yields data that are easily standardized according to various criteria and facilitates the integration of spatially explicit data generated by a variety of studies. This framework may thus provide one common ground for scientists from a diversity of fields. Also, it has utility for both research and management. Specifically, this research demonstrates that the application of cutting-edge remote sensing techniques (Multiple Endmember Spectral Mixture
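
    Spectral mixture analysis, the family of techniques this abstract closes with, models each pixel spectrum as a fractional combination of endmember spectra (e.g., shrub, grass, bare soil). A minimal sketch with invented endmember values and a synthetic mixed pixel:

```python
# Hedged sketch of linear spectral unmixing (not the dissertation's actual
# MESMA implementation): solve for endmember fractions with a sum-to-one row
# appended to the least-squares system (a soft constraint). Values invented.
import numpy as np

# Endmember reflectances in 4 bands (columns: shrub, grass, soil).
E = np.array([[0.05, 0.08, 0.20],
              [0.08, 0.12, 0.25],
              [0.30, 0.45, 0.30],
              [0.35, 0.30, 0.40]])

pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]   # synthetic mixed pixel

# Append the sum-to-one constraint, then solve ordinary least squares.
A = np.vstack([E, np.ones(3)])
b = np.append(pixel, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated fractions (shrub, grass, soil):", np.round(fractions, 2))
```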

  10. Effect of auditory integration training on cognitive function in elderly people with mild cognitive impairment

    Institute of Scientific and Technical Information of China (English)

    毛晓红; 魏秀红

    2012-01-01

    Objective: To evaluate the effect of auditory integration training on cognitive function in elderly people with mild cognitive impairment (MCI). Methods: Sixty elderly people aged 60 to 75 years with MCI were randomly divided into a training group (n = 30) and a control group (n = 30). The training group trained under the guidance of the research team from 9:00 to 9:30 each morning, six days a week, for six months; the control group received no training intervention. Cognitive function in both groups was assessed with the Basic Cognitive Ability Test before and after the intervention. Results: After six months, the training group performed better than the control group on rapid digit copying, rapid Chinese character comparison, and recall of mental arithmetic answers in the Basic Cognitive Ability Test (P < 0.05), and performed better after the intervention than before it (P < 0.05). Conclusions: Auditory integration training can improve the cognitive function of elderly people with MCI.

  11. Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex.

    Science.gov (United States)

    Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Travis, Katherine E; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric

    2012-07-11

    Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.

  12. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.

  13. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    Science.gov (United States)

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  14. Assessing temporal uncertainties in integrated groundwater management: an opportunity for change?

    Science.gov (United States)

    Anglade, J. A.; Billen, G.; Garnier, J.

    2013-12-01

    Since the early 1990s, high nitrate concentrations (occasionally exceeding the European drinking water standard of 50 mg NO3-/l) have been recorded in the borewells supplying the water requirements of Auxerre's 60,000 inhabitants. The water catchment area (86 km2) is located in a rural area dedicated to field crop production in intensive cereal farming systems based on massive inputs of synthetic fertilizers. In 1998, a co-management committee comprising the city of Auxerre, rural municipalities located in the water catchment area, consumers and farmers was created as a forward-looking associative structure to achieve integrated, adaptive and sustainable management of the resource. In 2002, 18 years after the first signs of water quality degradation, multiparty negotiation led to a cooperative agreement: a contribution to assist farmers toward new practices (optimized application of fertilizers, catch crops, and buffer strips) in the form of a surcharge on consumers' water bills. The management strategy, initially integrated and operating on a voluntary basis, did not rapidly deliver on its promises (there was no significant decrease in nitrate concentration). It evolved into a combination of short-term palliative solutions and contractual and regulatory instruments with higher requirements. The establishment of a regulatory framework caused major tensions between stakeholders that brought about a feeling of discouragement and a lack of understanding as to the absence of results on water quality after 20 years of joint actions. At this point, the urban-rural solidarity was in danger of being undermined, so the time issue, i.e. the delay between agricultural pressure changes and visible effects on water quality, was scientifically addressed and communicated to all the parties involved. First, water age dating analysis through CFC and SF6 (anthropogenic gases) coupled with a statistical long-term analysis of agricultural evolutions revealed a residence time in the Sequanian limestones

  15. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  16. Bilateral Collicular Interaction: Modulation of Auditory Signal Processing in Amplitude Domain

    Science.gov (United States)

    Fu, Zi-Ying; Wang, Xin; Jen, Philip H.-S.; Chen, Qi-Cai

    2012-01-01

    In the ascending auditory pathway, the inferior colliculus (IC) receives and integrates excitatory and inhibitory inputs from many lower auditory nuclei, intrinsic projections within the IC, the contralateral IC through the commissure of the IC, and the auditory cortex. All these connections make the IC a major center for subcortical temporal and spectral integration of auditory information. In this study, we examine bilateral collicular interaction in modulating amplitude-domain signal processing using electrophysiological recording, acoustic and focal electrical stimulation. Focal electrical stimulation of one (ipsilateral) IC produces widespread inhibition (61.6%) and focused facilitation (9.1%) of responses of neurons in the other (contralateral) IC, while 29.3% of the neurons were not affected. Bilateral collicular interaction produces a decrease in the response magnitude and an increase in the response latency of inhibited IC neurons but produces opposite effects on the response of facilitated IC neurons. These two groups of neurons are not separately located and are tonotopically organized within the IC. The modulation effect is most effective at low sound levels and is dependent upon the interval between the acoustic and electric stimuli. The focal electrical stimulation of the ipsilateral IC compresses or expands the rate-level functions of contralateral IC neurons. The focal electrical stimulation also produces a shift in the minimum threshold and dynamic range of contralateral IC neurons for as long as 150 minutes. The degree of bilateral collicular interaction is dependent upon the difference in the best frequency between the electrically stimulated IC neurons and modulated IC neurons. These data suggest that bilateral collicular interaction mainly changes the ratio between excitation and inhibition during signal processing so as to sharpen the amplitude sensitivity of IC neurons. Bilateral interaction may also be involved in acoustic

  17. Allen Brain Atlas: an integrated spatio-temporal portal for exploring the central nervous system.

    Science.gov (United States)

    Sunkin, Susan M; Ng, Lydia; Lau, Chris; Dolbeare, Tim; Gilbert, Terri L; Thompson, Carol L; Hawrylycz, Michael; Dang, Chinh

    2013-01-01

    The Allen Brain Atlas (http://www.brain-map.org) provides a unique online public resource integrating extensive gene expression data, connectivity data and neuroanatomical information with powerful search and viewing tools for the adult and developing brain in mouse, human and non-human primate. Here, we review the resources available at the Allen Brain Atlas, describing each product and data type [such as in situ hybridization (ISH) and supporting histology, microarray, RNA sequencing, reference atlases, projection mapping and magnetic resonance imaging]. In addition, standardized and unique features in the web applications are described that enable users to search and mine the various data sets. Features include both simple and sophisticated methods for gene searches, colorimetric and fluorescent ISH image viewers, graphical displays of ISH, microarray and RNA sequencing data, Brain Explorer software for 3D navigation of anatomy and gene expression, and an interactive reference atlas viewer. In addition, cross data set searches enable users to query multiple Allen Brain Atlas data sets simultaneously. All of the Allen Brain Atlas resources can be accessed through the Allen Brain Atlas data portal.

  18. Integrated remote sensing for multi-temporal analysis of urban land cover-climate interactions

    Science.gov (United States)

    Savastru, Dan M.; Zoran, Maria A.; Savastru, Roxana S.

    2016-08-01

    Climate change is considered to be the biggest environmental threat of the future in the south-eastern part of Europe. In the frame of predicted global warming, urban climate is an important issue in scientific research. Surface energy processes have an essential role in urban weather, climate and hydrosphere cycles, as well as in urban heat redistribution. This paper investigated the influence of urban growth on the thermal environment, in relationship with other biophysical variables, in the Bucharest metropolitan area of Romania. Remote sensing data from Landsat TM/ETM+ and time series of MODIS Terra/Aqua sensors were used to assess urban land cover-climate interactions over the period 2000-2015. Vegetation abundances and percent impervious surfaces were derived by means of a linear spectral mixture model, and a method for effectively enhancing impervious surface was developed to accurately examine urban growth. The land surface temperature (Ts), a key parameter for urban thermal characteristics analysis, was also analyzed in relation to the Normalized Difference Vegetation Index (NDVI) at the city level. Based on these parameters, urban growth, the urban heat island (UHI) effect and the relationships of Ts to other biophysical parameters were analyzed. The correlation analyses revealed that Ts had a strong positive correlation with percent impervious surface at the pixel scale and a negative correlation with vegetation abundance at the regional scale. This analysis provides an integrated research scheme, and the findings can be very useful for urban ecosystem modeling.
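
    Two of the computations above, NDVI and its correlation with Ts, are compact enough to sketch directly; the arrays below are simulated stand-ins for the Landsat/MODIS products:

```python
# Minimal sketch: NDVI from red/NIR reflectance and its correlation with
# land surface temperature (Ts). All arrays are simulated, not satellite data.
import numpy as np

rng = np.random.default_rng(3)
red = rng.uniform(0.05, 0.3, 10_000)    # red-band reflectance per pixel
nir = rng.uniform(0.2, 0.6, 10_000)     # near-infrared reflectance per pixel

ndvi = (nir - red) / (nir + red)

# Simulated Ts: cooler where vegetation is denser, plus noise (illustrative).
ts = 310.0 - 15.0 * ndvi + rng.normal(0, 1.5, ndvi.size)   # kelvin

r = np.corrcoef(ndvi, ts)[0, 1]
print(f"NDVI-Ts correlation: r = {r:.2f}")                 # strongly negative
```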

  19. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate the involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and the pedigree diagrams were established. History of illness, physical examination, pure tone audiometry, acoustic reflex, ABRs and transient evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received a vestibular caloric test, computed tomography (CT) scan of the temporal bone and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflexes and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees I and II, two affected brothers were found, respectively, while in Pedigree III, two sisters were affected. All the patients were otherwise normal without

  20. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and impaired speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds, and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included electrophysiological and behavioral auditory processing measures and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests, and higher hearing aid benefit in noisy situations (p < 0.05). No significant changes were noted for the control group. CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, and figure-to-ground for verbal sounds, and greater benefits in reverberant and noisy environments.

  1. Audio-tactile integration and the influence of musical training.

    Directory of Open Access Journals (Sweden)

    Anja Kuchenbuch

    Full Text Available Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks, but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and whether musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  2. Integrated snow and avalanche monitoring system for Indian Himalaya using multi-temporal satellite imagery and ancillary data

    Science.gov (United States)

    Sharma, S. S.; Mani, Sneh; Mathur, P.

    The variations in the local climate, environment and altitude, as well as fast snow cover build-up and rapid changes in snow characteristics with the passage of winter, are major contributing factors that make snow avalanches one of the most threatening problems in the North West Himalaya. For the sustainable development of these mountainous areas, a number of multi-purpose projects are being planned. In recent times, the danger of natural and man-made hazards is increasing and the availability of water is fluctuating, making project implementation difficult. To overcome these difficulties to a great extent, an integrated monitoring system is required for short-term as well as long-term assessment of snow cover variation and avalanche hazard. In order to monitor the spatial extent of snow cover, satellite data can be employed on an operational basis. Spectral settings as well as temporal and spatial resolution make time series NOAA-AVHRR and MODIS sensor data well suited for operational snow cover monitoring at regional or continental scale; Indian Remote Sensing Satellite (IRS) LISS, WiFS and AWiFS sensor data are suitable for studies at larger scale; and microwave data for extraction of snow wetness information. In the present paper, an attempt is made to study the trends of changes in snow characteristics and the related avalanche phenomena using time series of multi-temporal, multi-resolution satellite data for different ranges in the Western Himalaya, namely the Pir Panjal range, Great Himalaya range, Zanskar range, Ladakh range and Great Karakoram range. The operational processing of these data included geocoding, calibration, terrain normalization, classification, statistical post-classification and derivation of snow cover statistics. The calibration and normalization of the imagery made the application of physically based classification thresholds possible for the albedo, brightness temperature and Normalized Difference Snow Index (NDSI) parameters
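
    The NDSI computation mentioned above is a simple band ratio; the sketch below applies it with the commonly used 0.4 threshold (scene-dependent in practice) to simulated reflectances:

```python
# Illustrative sketch of NDSI-based snow mapping:
# NDSI = (green - SWIR) / (green + SWIR), thresholded to classify snow pixels.
# Reflectance arrays are simulated stand-ins for calibrated satellite bands.
import numpy as np

rng = np.random.default_rng(4)
green = rng.uniform(0.1, 0.9, (100, 100))   # green-band reflectance
swir = rng.uniform(0.05, 0.5, (100, 100))   # shortwave-infrared reflectance

ndsi = (green - swir) / (green + swir)
snow_mask = ndsi > 0.4                      # common threshold; scene-dependent

print(f"snow-covered fraction: {snow_mask.mean():.1%}")
```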

  3. The Temporal Dynamics of Feature Integration for Color, Form, and Motion

    Directory of Open Access Journals (Sweden)

    KS Pilz

    2012-07-01

    Full Text Available When two similar visual stimuli are presented in rapid succession, only their fused image is perceived, without conscious access to the individual stimuli. Such feature fusion occurs both for color (e.g., Efron, 1973) and form (e.g., Scharnowski et al., 2007). For verniers, the fusion process lasts for more than 400 ms, as has been shown using TMS (Scharnowski et al., 2009). In three experiments, we used light masks to investigate the time course of feature fusion for color, form, and motion. In experiment one, two verniers were presented in rapid succession with opposite offset directions. Subjects had to indicate the offset direction of the vernier. In a second experiment, a red and a green disk were presented in rapid succession, and subjects had to indicate whether they perceived the fused, yellow disk rather than red or green. In a third experiment, three frames of random dots were presented successively. The first two frames created a percept of apparent motion to the upper right and the last two frames one to the upper left, or vice versa. Subjects had to indicate the direction of motion. All stimuli were presented foveally. In all three experiments, we first balanced performance so that neither the first nor the second stimulus dominated the fused percept. In a second step, a light mask was presented either before, during, or after stimulus presentation. Depending on presentation time, the light masks modulated the fusion process so that either the first or the second stimulus dominated the percept. Our results show that unconscious feature fusion lasts more than five times longer than the actual stimulus duration, which indicates that individual features are stored for a substantial amount of time before they are integrated.

  4. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    Directory of Open Access Journals (Sweden)

    David eJenson

    2014-07-01

    Full Text Available Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of µ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen and fifteen out of 20 participants produced left and right µ components, respectively, localized to the precentral gyri. Discrimination conditions were characterized by significant (pFDR < .05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that µ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. µ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while µ-alpha ERD may index re-afferent sensory feedback during speech rehearsal and production.
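
    A hedged sketch of this kind of pipeline in MNE-Python is shown below; the file name, event coding, and all parameter values are placeholders, not the study's actual settings:

```python
# Sketch of an ICA + time-frequency workflow in MNE-Python: decompose
# continuous EEG into independent components, then compute Morlet-wavelet
# power over the alpha/beta range to inspect ERD/ERS-like dynamics.
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.time_frequency import tfr_morlet

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file
raw.filter(1.0, 40.0)                  # band-pass before ICA (placeholder band)

ica = ICA(n_components=20, random_state=97)
ica.fit(raw)                           # mu components would be selected from these

events = mne.find_events(raw)          # assumes a stim channel is present
epochs = mne.Epochs(raw, events, tmin=-0.5, tmax=1.5,
                    baseline=None, preload=True)

freqs = np.arange(8.0, 26.0, 2.0)      # alpha through beta
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                   return_itc=False)
power.apply_baseline(baseline=(-0.5, 0.0), mode="logratio")  # ERD/ERS in log units
```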

  5. A Stem Cell-Seeded Nanofibrous Scaffold for Auditory Nerve Replacement

    Science.gov (United States)

    2013-10-01

    [Fragment from a grant report (award W81XWH-12-1-0492) on a stem cell-seeded nanofibrous scaffold for auditory nerve replacement; only a figure caption survives:] Biopolymer scaffold within the internal auditory meatus (IAM) of the guinea pig. (A) The lateral wall of an intact guinea pig temporal bone is shown.

  6. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    Science.gov (United States)

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  7. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  9. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  10. Postnatal development of temporal integration, spike timing and spike threshold regulation by a dendrotoxin-sensitive K⁺ current in rat CA1 hippocampal cells.

    Science.gov (United States)

    Giglio, Anna M; Storm, Johan F

    2014-01-01

    Spike timing and network synchronization are important for plasticity, development and maturation of brain circuits. Spike delays and timing can be strongly modulated by a low-threshold, slowly inactivating, voltage-gated potassium current called D-current (ID). ID can delay the onset of spiking, cause temporal integration of multiple inputs, and regulate spike threshold and network synchrony. Recent data indicate that ID can also undergo activity-dependent, homeostatic regulation. Therefore, we have studied the postnatal development of ID-dependent mechanisms in CA1 pyramidal cells in hippocampal slices from young rats (P7-27), using somatic whole-cell recordings. At P21-27, these neurons showed long spike delays and pronounced temporal integration in response to a series of brief depolarizing current pulses or a single long pulse, whereas younger cells (P7-20) showed shorter discharge delays and weak temporal integration, although the spike threshold became increasingly negative with maturation. Application of α-dendrotoxin (α-DTX), which blocks ID, reduced the spiking latency and temporal integration most strongly in mature cells, while shifting the spike threshold most strongly in a depolarizing direction in these cells. Voltage-clamp analysis revealed an α-DTX-sensitive outward current (ID) that increased in amplitude during development. In contrast to P21-23, ID in the youngest group (P7-9) showed smaller peri-threshold amplitude. This may explain why long discharge delays and robust temporal integration only appear later, 3 weeks postnatally. We conclude that ID properties and ID-dependent functions develop postnatally in rat CA1 pyramidal cells, and ID may modulate network activity and plasticity through its effects on synaptic integration, spike threshold, timing and synchrony.
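
    The delayed spiking and temporal integration attributed to ID can be pictured with a toy integrate-and-fire cell carrying a slowly inactivating subthreshold K+ conductance; every parameter below is ad hoc for illustration, not fitted to these recordings:

        import numpy as np

        dt, t = 0.1, np.arange(0.0, 500.0, 0.1)      # time base (ms)
        V, h = -70.0, 1.0                            # voltage (mV), ID availability
        tau_m, tau_h = 20.0, 150.0                   # membrane / ID inactivation (ms)
        g_D, E_K, V_th, V_act = 2.0, -90.0, -50.0, -65.0
        drive = np.where(t > 100.0, 40.0, 0.0)       # step depolarization from 100 ms
        spikes = []
        for i in range(len(t)):
            active = V > V_act                       # ID activates at subthreshold voltages
            I_D = g_D * h * (V - E_K) / 40.0 if active else 0.0
            V += dt / tau_m * (-(V + 70.0) + drive[i] - 25.0 * I_D)
            h += dt / tau_h * ((0.0 if active else 1.0) - h)  # slow inactivation when active
            if V >= V_th:                            # spike-and-reset
                spikes.append(t[i])
                V = -70.0
        if spikes:
            # The outward ID holds the cell below threshold until it inactivates,
            # producing a long (~100+ ms) discharge delay after step onset.
            print("first spike %.0f ms after step onset" % (spikes[0] - 100.0))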

  11. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA, 1982, 416-425]. When a "classical" gammatone filterbank was applied within this spectrum-based model, the model largely underestimated human performance at high signal frequencies. However, this limitation could be resolved by employing an auditory filterbank with narrower filters.

  12. Validation of a station-prototype designed to integrate temporally soil N2O fluxes: IPNOA Station prototype.

    Science.gov (United States)

    Laville, Patricia; Volpi, Iride; Bosco, Simona; Virgili, Giorgio; Neri, Simone; Continanza, Davide; Bonari, Enrico

    2016-04-01

    Nitrous oxide (N2O) flux measurement at the agricultural soil surface remains a major challenge for the scientific community. Evaluating integrated soil N2O fluxes is difficult because these emissions are lower than those of the other greenhouse gas sources (CO2, CH4). They are also sporadic, being highly dependent on a few environmental conditions that act as limiting factors. Within a LIFE project (IPNOA: LIFE11 ENV/IT/00032) a station prototype was developed to integrate N2O and CO2 emissions annually using an automatic chamber technique. The main challenge was to develop a device durable enough to measure CO2 and N2O fluxes continuously, with sufficient sensitivity to allow reliable assessment of soil GHG emissions with minimal technical field intervention. The IPNOA station prototype was developed by West System SRL and was operated for 2 years (2014-2015) in an experimental maize field in Tuscany. The prototype comprised six automatic chambers; the complete measurement cycle lasted 2 hours. Each chamber closed for 20 min and gas accumulation was monitored in line with IR spectrometers. Auxiliary measurements, including soil temperature and water content as well as weather data, were also recorded. All data were managed remotely with the acquisition software installed in the prototype control unit. Operating the prototype over the two cropping years allowed its major features to be tested: its ability to follow the temporal variation of soil N2O fluxes over a long period, across weather conditions and agricultural management, and to demonstrate the value of continuous flux measurements. The temporal distribution of N2O fluxes indicated that emissions can be very large and discontinuous over short periods of less than ten days, and that for about 70% of the time N2O fluxes were around the detection limit of the instrumentation, evaluated at 2 ng N ha-1 day-1. N2O emission factor assessments were 1.9% in 2014...
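
    For orientation, a closed-chamber flux is conventionally computed from the headspace accumulation slope as F = (dC/dt)·V/A; a sketch with synthetic numbers (the chamber dimensions and concentrations below are illustrative, not the prototype's specifications):

        import numpy as np

        t_s = np.array([0, 5, 10, 15, 20]) * 60.0              # closure time (s)
        c_ppm = np.array([0.330, 0.334, 0.339, 0.343, 0.348])   # headspace N2O (synthetic)
        slope = np.polyfit(t_s, c_ppm, 1)[0]                    # ppm s-1

        V, A = 0.05, 0.25                            # chamber volume (m3), footprint (m2)
        air_molar = 101325.0 / (8.314 * 298.15)      # mol air m-3 at 25 degC, 1 atm
        flux_mol = slope * 1e-6 * air_molar * V / A  # mol N2O m-2 s-1
        flux_gN = flux_mol * 28.0                    # g N m-2 s-1 (two N atoms per N2O)
        print("%.2f ug N m-2 h-1" % (flux_gN * 1e6 * 3600))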

  13. Brain responses and looking behaviour during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.

    Directory of Open Access Journals (Sweden)

    Elena V Kushnerenko

    2013-07-01

    Full Text Available The use of visual cues during the processing of audiovisual speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6-9 months to 14-16 months of age. We used eye-tracking to examine whether individual differences in visual attention during audiovisual processing of speech in 6- to 9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these infants also participated in an event-related potential (ERP) audiovisual task within the same experimental session. Language development was then followed up at the age of 14-16 months, using two measures of language development: the Preschool Language Scale (PLS) and the Oxford Communicative Development Inventory (CDI). The results show that those infants who were less efficient in auditory speech processing at the age of 6-9 months had lower receptive language scores at 14-16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audio-visually incongruent stimuli at 6-9 months were both significantly associated with language development at 14-16 months. These findings add to the understanding of individual differences in neural signatures of audiovisual processing and associated looking behaviour in infants.

  14. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure-tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures - temporally homogeneous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader "peripheral" filters or reduced compression...
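
    A sketch of the kind of time-averaged statistics meant here, computed on a single band envelope (the cochlear band decomposition is omitted; this follows the spirit of McDermott et al., not their exact implementation):

        import numpy as np

        def envelope_stats(env):
            """Mean, coefficient of variation, and skewness of one band envelope."""
            m = env.mean()
            v = env.var()
            skew = np.mean((env - m) ** 3) / (v ** 1.5 + 1e-12)
            return m, np.sqrt(v) / (m + 1e-12), skew

        rain_like = np.abs(np.random.randn(44100))   # stand-in for a measured envelope
        print(envelope_stats(rain_like))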

  15. Modality specific neural correlates of auditory and somatic hallucinations

    Science.gov (United States)

    Shergill, S; Cameron, L; Brammer, M; Williams, S; Murray, R; McGuire, P

    2001-01-01

    Somatic hallucinations occur in schizophrenia and other psychotic disorders, although auditory hallucinations are more common. Although the neural correlates of auditory hallucinations have been described in several neuroimaging studies, little is known of the pathophysiology of somatic hallucinations. Functional magnetic resonance imaging (fMRI) was used to compare the distribution of brain activity during somatic and auditory verbal hallucinations, occurring at different times in a 36-year-old man with schizophrenia. Somatic hallucinations were associated with activation in the primary somatosensory and posterior parietal cortex, areas that normally mediate tactile perception. Auditory hallucinations were associated with activation in the middle and superior temporal cortex, areas involved in processing external speech. Hallucinations in a given modality seem to involve areas that normally process sensory information in that modality.

 PMID:11606687

  16. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system and music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  17. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit.

  18. Leaf δ15N as a temporal integrator of nitrogen-cycling processes at the Mojave Desert FACE experiment

    Science.gov (United States)

    Sonderegger, D.; Koyama, A.; Jin, V.; Billings, S. A.; Ogle, K.; Evans, R. D.

    2011-12-01

    Ecosystem response to elevated carbon dioxide (CO2) in arid environments is regulated primarily by water, which may interact with nitrogen availability. Leaf nitrogen isotope composition (δ15N) can serve as an important indicator of changes in nitrogen dynamics by integrating changes in plant physiology and ecosystem biogeochemical processes. Because of this temporal integration, careful modeling of the antecedent conditions is necessary for understanding the processes driving variation in leaf δ15N. We measured leaf δ15N of Larrea tridentata (creosotebush) over the 10-year lifetime of the Nevada Desert Free-Air CO2 Enrichment (FACE) experiment. Leaf δ15N exhibited two patterns. First, elevated atmospheric CO2 significantly increased Larrea leaf δ15N by approximately 2 to 3 ‰ compared to plants exposed to ambient CO2 concentrations. Second, plants in both CO2 treatments exhibited significant seasonal cycles in leaf δ15N, with higher values during the fall and winter seasons. We modeled leaf δ15N using a hierarchical Bayesian framework that incorporated soil moisture, temperature, and Palmer Drought Severity Index (PDSI) covariates in addition to a CO2 treatment effect and plot random effects. Antecedent moisture effects were modeled by using a combination of the previous season's aggregated conditions and a smoothly varying weighted average of the months or weeks directly preceding the observation. The time lag between the driving antecedent condition and the observed change in leaf δ15N indicates a significant and unobserved process mechanism. Preliminary results suggest a CO2 treatment interaction with the lag effect, indicating a treatment effect on the latent process.
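
    A sketch of the antecedent-covariate construction described above: a normalized weighted average over the preceding months, with weights that the hierarchical Bayesian model would estimate (fixed here for illustration):

        import numpy as np

        def antecedent(x, w):
            """x: monthly covariate series; w[0] weights the most recent month."""
            w = np.asarray(w, float)
            w = w / w.sum()                       # normalized lag weights
            m = len(w)
            return np.array([np.dot(w, x[i - m:i][::-1])
                             for i in range(m, len(x) + 1)])

        moisture = np.random.rand(120)            # 10 years of monthly values (synthetic)
        w = np.exp(-0.5 * np.arange(6))           # heavier weight on recent months
        moisture_ant = antecedent(moisture, w)    # one covariate value per observation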

  19. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Bleichner, Martin G; Debener, Stefan

    2016-01-01

    Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  20. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Full Text Available Cochlear implant (CI users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users’ speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS. Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  1. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration.
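
    The decoding-accuracy index is, in essence, cross-validated classification of the category label from voxel patterns; a schematic with synthetic data, assuming scikit-learn (the study's actual classifier and validation scheme may differ):

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 500))        # trial-by-voxel fMRI patterns (synthetic)
        y = np.repeat([0, 1], 40)             # two categories, e.g. male vs. female clips

        acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        print("decoding accuracy: %.2f (chance = 0.50)" % acc)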

  2. Integrating temporal difference methods and self-organizing neural networks for reinforcement learning with delayed evaluative feedback.

    Science.gov (United States)

    Tan, A H; Lu, N; Xiao, D

    2008-02-01

    This paper presents a neural architecture for learning category nodes encoding mappings across multimodal patterns involving sensory inputs, actions, and rewards. By integrating adaptive resonance theory (ART) and temporal difference (TD) methods, the proposed neural model, called TD fusion architecture for learning, cognition, and navigation (TD-FALCON), enables an autonomous agent to adapt and function in a dynamic environment with immediate as well as delayed evaluative feedback (reinforcement) signals. TD-FALCON learns the value functions of the state-action space estimated through on-policy and off-policy TD learning methods, specifically state-action-reward-state-action (SARSA) and Q-learning. The learned value functions are then used to determine the optimal actions based on an action selection policy. We have developed TD-FALCON systems using various TD learning strategies and compared their performance in terms of task completion, learning speed, as well as time and space efficiency. Experiments based on a minefield navigation task have shown that TD-FALCON systems are able to learn effectively with both immediate and delayed reinforcement and achieve a stable performance at a pace much faster than that of standard gradient-descent-based reinforcement learning systems.
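
    The TD half of the architecture reduces to familiar tabular updates; a minimal Q-learning sketch (the ART category machinery and the minefield environment are omitted, and the environment interaction is left abstract):

        import numpy as np

        n_states, n_actions = 25, 4
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

        def choose(s):
            """Epsilon-greedy action selection policy."""
            if np.random.rand() < eps:
                return np.random.randint(n_actions)
            return int(np.argmax(Q[s]))

        def update(s, a, r, s2, done):
            """Off-policy TD (Q-learning); for SARSA, replace max with Q[s2, a2]."""
            target = r if done else r + gamma * np.max(Q[s2])
            Q[s, a] += alpha * (target - Q[s, a])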

  3. Auditory streaming by phase relations between components of harmonic complexes: a comparative study of human subjects and bird forebrain neurons.

    Science.gov (United States)

    Dolležal, Lena-Vanessa; Itatani, Naoya; Günther, Stefanie; Klump, Georg M

    2012-12-01

    Auditory streaming describes a percept in which a sequential series of sounds either is segregated into different streams or is integrated into one stream based on differences in their spectral or temporal characteristics. This phenomenon has been analyzed in human subjects (psychophysics) and European starlings (neurophysiology), presenting harmonic complex (HC) stimuli with different phase relations between their frequency components. Such stimuli allow evaluating streaming by temporal cues, as these stimuli only vary in the temporal waveform but have identical amplitude spectra. The present study applied the commonly used ABA- paradigm (van Noorden, 1975) and matched stimulus sets in psychophysics and neurophysiology to evaluate the effects of fundamental frequency (f₀), frequency range (f(LowCutoff)), tone duration (TD), and tone repetition time (TRT) on streaming by phase relations of the HC stimuli. By comparing the percept of humans with rate or temporal responses of avian forebrain neurons, a neuronal correlate of perceptual streaming of HC stimuli is described. The differences in the pattern of the neurons' spike rate responses provide for a better explanation for the percept observed in humans than the differences in the temporal responses (i.e., the representation of the periodicity in the timing of the action potentials). Especially for HC stimuli with a short 40-ms duration, the differences in the pattern of the neurons' temporal responses failed to represent the patterns of human perception, whereas the neurons' rate responses showed a good match. These results suggest that differential rate responses are a better predictor for auditory streaming by phase relations than temporal responses.
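
    A sketch of the stimulus construction: ABA- triplets of harmonic complexes with identical amplitude spectra that differ only in component phase (the timing and phase choices below are illustrative, not the exact stimulus parameters):

        import numpy as np

        fs, f0, n_h = 44100, 100.0, 20
        def hc(dur, phases):
            """Harmonic complex of n_h equal-amplitude partials of f0."""
            t = np.arange(int(dur * fs)) / fs
            return sum(np.sin(2 * np.pi * (k + 1) * f0 * t + phases[k])
                       for k in range(n_h)) / n_h

        td, trt = 0.040, 0.125                    # tone duration, tone repetition time (s)
        A = hc(td, np.zeros(n_h))                 # sine-phase complex
        B = hc(td, np.where(np.arange(n_h) % 2, np.pi / 2, 0.0))  # alternating phase
        gap = np.zeros(int((trt - td) * fs))
        triplet = np.concatenate([A, gap, B, gap, A, gap, np.zeros(int(trt * fs))])
        sequence = np.tile(triplet, 30)           # ABA-ABA-... test sequence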

  4. Auditory aura in frontal opercular epilepsy: sounds from afar.

    Science.gov (United States)

    Thompson, Stephen A; Alexopoulos, Andreas; Bingaman, William; Gonzalez-Martinez, Jorge; Bulacio, Juan; Nair, Dileep; So, Norman K

    2015-06-01

    Auditory auras are typically considered to localize to the temporal neocortex. Herein, we present two cases of frontal operculum/perisylvian epilepsy with auditory auras. Following a non-invasive evaluation, including ictal SPECT and magnetoencephalography, implicating the frontal operculum, these cases were evaluated with invasive monitoring, using stereoelectroencephalography and subdural (plus depth) electrodes, respectively. Spontaneous and electrically-induced seizures showed an ictal onset involving the frontal operculum in both cases. A typical auditory aura was triggered by stimulation of the frontal operculum in one. Resection of the frontal operculum and subjacent insula rendered one case seizure- (and aura-) free. From a hodological (network) perspective, we discuss these findings with consideration of the perisylvian and insular network(s) interconnecting the frontal and temporal lobes, and revisit the non-invasive data, specifically that of ictal SPECT.

  5. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...
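
    One way to picture the loudness-growth component: with recruitment, an impaired ear reaches normal loudness over a compressed level range above its raised threshold. A toy curve follows; the functional form and constants are ad hoc illustrations, not the thesis model:

        import numpy as np

        def loudness_sones(level_db, hl_db=0.0):
            """Toy recruitment curve: full loudness recovered by ~100 dB HL."""
            eff = np.clip((level_db - hl_db) / (100.0 - hl_db), 0.0, None) * 100.0
            return 2.0 ** ((eff - 40.0) / 10.0)   # Stevens-like: 1 sone at 40 dB

        levels = np.array([40.0, 60.0, 80.0, 100.0])
        print(loudness_sones(levels))             # normal ear
        print(loudness_sones(levels, hl_db=50.0)) # impaired ear: steeper growth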

  6. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 varied stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids auditory scene analysis.

  7. The effect of spatial-temporal audiovisual disparities on saccades in a complex scene.

    Science.gov (United States)

    Van Wanrooij, Marc M; Bell, Andrew H; Munoz, Douglas P; Van Opstal, A John

    2009-09-01

    In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV background (Corneil et al., J Neurophysiol 88:438-454, 2002). In those experiments both stimulus modalities belonged to the same object, and subjects were instructed to foveate that source, irrespective of modality. Under natural conditions, however, subjects have no prior knowledge as to whether visual and auditory events originate from the same or from different objects in space and time. In the present experiments we included these possibilities by introducing various spatial and temporal disparities between the visual and auditory events within the AV background. Subjects had to orient quickly and accurately to the visual target, ignoring the auditory distractor. We show that this task belies a dichotomy, as it was quite difficult to produce fast responses [...] (cf. Corneil et al., J Neurophysiol 88:438-454, 2002). In contrast, with increasing spatial disparity, integration gradually broke down, as the subjects' responses became bistable: saccades were directed either to the auditory (fast responses) or to the visual stimulus (late responses). Interestingly, also in this case responses were faster and more accurate than to the respective unisensory stimuli.

  8. Improving depiction of temporal bone anatomy with low-radiation dose CT by an integrated circuit detector in pediatric patients: a preliminary study.

    Science.gov (United States)

    He, Jingzhen; Zu, Yuliang; Wang, Qing; Ma, Xiangxing

    2014-12-01

    The purpose of this study was to determine the performance of low-dose computed tomography (CT) with an integrated circuit (IC) detector in defining fine structures of the temporal bone in children, by comparison with the conventional detector. The study was performed with the approval of our institutional review board and the patients' anonymity was maintained. A total of 86 children were included [...] (P > 0.05). The low-dose CT images acquired with the IC detector provided better depiction of the fine osseous structures of the temporal bone than those acquired with the conventional DC detector.

  9. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue;

    2006-01-01

    The hypothesis is that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p...).
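
    One plausible reading of "temporal features of audio" is the slow amplitude envelope; a sketch of extracting it, assuming SciPy (the paper's actual feature set is not specified here):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def amplitude_envelope(audio, fs, lp_hz=30.0):
            """Broadband envelope, low-pass filtered to the slow modulations."""
            env = np.abs(hilbert(audio))          # instantaneous amplitude
            b, a = butter(2, lp_hz / (fs / 2.0))  # smooth to < lp_hz modulations
            return filtfilt(b, a, env)

        fs = 16000
        word = np.random.randn(fs)                # stand-in for a recorded word
        env = amplitude_envelope(word, fs)        # could drive an artificial face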

  10. Measuring the dynamics of neural responses in primary auditory cortex

    CERN Document Server

    Depireux, Didier A.; Simon, Jonathan Z.; Shamma, Shihab A.

    1998-01-01

    We review recent developments in the measurement of the dynamics of the response properties of auditory cortical neurons to broadband sounds, which is closely related to the perception of timbre. The emphasis is on a method that characterizes the spectro-temporal properties of single neurons to dynamic, broadband sounds, akin to the drifting gratings used in vision. The method treats the spectral and temporal aspects of the response on an equal footing.
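
    The dynamic broadband stimuli referred to are moving ripples: spectro-temporal envelopes of the form S(t, x) = 1 + dA*sin(2*pi*(w*t + Omega*x)) imposed on dense tone carriers. A sketch of the envelope alone, with illustrative parameter values:

        import numpy as np

        w, Omega, dA = 4.0, 0.5, 0.9      # ripple velocity (Hz), density (cyc/oct), depth
        t = np.linspace(0.0, 1.0, 1000)   # time (s)
        x = np.linspace(0.0, 5.0, 128)    # tonotopic axis (octaves above base frequency)
        T, X = np.meshgrid(t, x)
        S = 1.0 + dA * np.sin(2 * np.pi * (w * T + Omega * X))  # envelope per (band, time)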

  11. Effects of auditory integration training on brainstem auditory evoked potential and symptoms in children with autism spectrum disorders

    Institute of Scientific and Technical Information of China (English)

    周洋; 陈一心; 高润; 王建军; 陈图农; 骆松; 黄懿钖

    2016-01-01

    Objective: To investigate the curative effect of auditory integration training (AIT) on autism spectrum disorder (ASD) in children with abnormal brainstem auditory evoked potentials (BAEP). Methods: 56 ASD patients (aged 2-6 years) with abnormal BAEP were treated with AIT. BAEP was reviewed after each course of treatment until the test results returned to normal or showed no further change. The children's core symptoms were evaluated using the Autism Behavior Checklist (ABC) and the Childhood Autism Rating Scale (CARS) before and after treatment. Results: The 56 children received a mean of (1.95±0.92) courses of AIT. Compared with the data before AIT, CARS scores ((36.32±3.54) vs. (34.11±3.12)) and scores on the sensory factor ((5.65±4.61) vs. (4.28±4.11)) and the stereotypes factor of the ABC decreased significantly (P<0.05). After treatment, the BAEP of 29 cases (51.79%) returned to normal. The wave latencies on the left side (I: (1.81±0.17) ms vs. (1.71±0.12) ms; III: (4.14±0.18) ms vs. (4.07±0.17) ms; V: (6.09±0.23) ms vs. (5.97±0.22) ms) and the right side (I: (1.79±0.17) ms vs. (1.74±0.13) ms; III: (4.15±0.16) ms vs. (4.07±0.16) ms; V: (6.06±0.23) ms vs. (5.99±0.26) ms) were significantly shortened (P<0.05). Conclusion: AIT can improve the functional deficit of the brainstem auditory pathway in children with ASD, as well as the core symptoms of ASD.

  12. Preliminary discussion of the relationship between the integrated visual and auditory continuous performance test (IVA-CPT) and intelligence quotient

    Institute of Scientific and Technical Information of China (English)

    冯冰; 杨良政; 周海鹰; 陈红

    2011-01-01

    Objective: To explore the relationship between the quotients of the integrated visual and auditory continuous performance test (IVA-CPT) and intelligence quotient, and to find out the value of IVA-CPT for assisted diagnosis. Methods: A linear correlation analysis was conducted on the quotients of IVA-CPT and intelligence quotient. Results: There was a positive correlation between intelligence quotient and the comprehensive attention quotient, response control quotient, understanding quotient, and visual sensory integration quotient; the intelligence quotients of children with attention deficit hyperactivity disorder (ADHD) of the attention-deficit type were relatively low. Conclusion: The quotients of IVA-CPT are closely related to intelligence quotient, and the impact of the attention-deficit type of ADHD on intelligence quotient is large.

  13. Altered hippocampal myelinated fiber integrity in a lithium-pilocarpine model of temporal lobe epilepsy: a histopathological and stereological investigation.

    Science.gov (United States)

    Ye, Yuanzhen; Xiong, Jiajia; Hu, Jun; Kong, Min; Cheng, Li; Chen, Hengsheng; Li, Tingsong; Jiang, Li

    2013-07-19

    The damage of white matter, primarily myelinated fibers, in the central nervous system (CNS) of temporal lobe epilepsy (TLE) patients has been recently reported. However, limited data exist addressing the types of changes that occur to myelinated fibers inside the hippocampus as a result of TLE. The current study was designed to examine this issue in a lithium-pilocarpine rat model. Investigated by electroencephalography (EEG), Gallyas silver staining, immunohistochemistry, western blotting, transmission electron microscopy, and stereological methods, the results showed that hippocampal myelinated fibers of the epilepsy group were degenerated with significantly less myelin basic protein (MBP) expression relative to those of control group rats. Stereological analysis revealed that the total volumes of hippocampal formation, myelinated fibers, and myelin sheaths in the hippocampus of epilepsy group rats were decreased by 20.43%, 49.16%, and 52.60%, respectively. In addition, epilepsy group rats showed significantly greater mean diameters of myelinated fibers and axons, whereas the mean thickness of myelin sheaths was less, especially for small axons with diameters from 0.1 to 0.8 µm, compared to control group rats. Finally, the total length of the myelinated fibers in the hippocampus of epilepsy group rats was significantly decreased by 56.92%, compared to that of the control group, with the decreased length most prominent for myelinated fibers with diameters from 0.4 to 0.8 µm. This study is the first to provide experimental evidence that the integrity of hippocampal myelinated fibers is negatively affected by inducing epileptic seizures with pilocarpine, which may contribute to the abnormal propagation of epileptic discharge.

  14. Integrated community profiling indicates long-term temporal stability of the predominant faecal microbiota in captive cheetahs.

    Science.gov (United States)

    Becker, Anne A M J; Janssens, Geert P J; Snauwaert, Cindy; Hesta, Myriam; Huys, Geert

    2015-01-01

    Understanding the symbiotic relationship between gut microbes and their animal host requires characterization of the core microbiota across populations and in time. Especially in captive populations of endangered wildlife species such as the cheetah (Acinonyx jubatus), this knowledge is a key element to enhance feeding strategies and reduce gastrointestinal disorders. In order to investigate the temporal stability of the intestinal microbiota in cheetahs under human care, we conducted a longitudinal study over a 3-year period with bimonthly faecal sampling of 5 cheetahs housed in two European zoos. For this purpose, an integrated 16S rRNA DGGE-clone library approach was used in combination with a series of real-time PCR assays. Our findings disclosed a stable faecal microbiota, beyond intestinal community variations that were detected between zoo sample sets or between animals. The core of this microbiota was dominated by members of Clostridium clusters I, XI and XIVa, with mean concentrations ranging from 7.5-9.2 log10 CFU/g faeces and with significant positive correlations between these clusters (P<0.05), and by Lactobacillaceae. Moving window analysis of DGGE profiles revealed 23.3-25.6% change between consecutive samples for four of the cheetahs. The fifth animal in the study suffered from intermediate episodes of vomiting and diarrhea during the monitoring period and exhibited remarkably more change (39.4%). This observation may reflect the temporary impact of perturbations such as the animal's compromised health, antibiotic administration or a combination thereof, which temporarily altered the relative proportions of Clostridium clusters I and XIVa. In conclusion, this first long-term monitoring study of the faecal microbiota in feline strict carnivores not only reveals a remarkable compositional stability of this ecosystem, but also shows a qualitative and quantitative similarity in a defined set of faecal bacterial lineages across the five animals under study that may typify the core phylogenetic microbiome of cheetahs.
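
    The moving-window percentages quoted above come from comparing similarity of consecutive fingerprints; a schematic using Dice similarity on band presence/absence (a common convention, assumed here rather than taken from the paper):

        import numpy as np

        def pct_change(bands_a, bands_b):
            """bands_*: boolean vectors marking detected DGGE bands."""
            dice = 2.0 * np.sum(bands_a & bands_b) / (np.sum(bands_a) + np.sum(bands_b))
            return 100.0 * (1.0 - dice)

        a = np.array([1, 1, 0, 1, 0, 1], bool)    # consecutive bimonthly profiles
        b = np.array([1, 0, 0, 1, 1, 1], bool)    # (synthetic)
        print("%.1f%% change" % pct_change(a, b))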

  15. Integrated community profiling indicates long-term temporal stability of the predominant faecal microbiota in captive cheetahs.

    Directory of Open Access Journals (Sweden)

    Anne A M J Becker

    Full Text Available Understanding the symbiotic relationship between gut microbes and their animal host requires characterization of the core microbiota across populations and in time. Especially in captive populations of endangered wildlife species such as the cheetah (Acinonyx jubatus), this knowledge is a key element to enhance feeding strategies and reduce gastrointestinal disorders. In order to investigate the temporal stability of the intestinal microbiota in cheetahs under human care, we conducted a longitudinal study over a 3-year period with bimonthly faecal sampling of 5 cheetahs housed in two European zoos. For this purpose, an integrated 16S rRNA DGGE-clone library approach was used in combination with a series of real-time PCR assays. Our findings disclosed a stable faecal microbiota, beyond intestinal community variations that were detected between zoo sample sets or between animals. The core of this microbiota was dominated by members of Clostridium clusters I, XI and XIVa, with mean concentrations ranging from 7.5-9.2 log10 CFU/g faeces and with significant positive correlations between these clusters (P<0.05), and by Lactobacillaceae. Moving window analysis of DGGE profiles revealed 23.3-25.6% change between consecutive samples for four of the cheetahs. The fifth animal in the study suffered from intermediate episodes of vomiting and diarrhea during the monitoring period and exhibited remarkably more change (39.4%). This observation may reflect the temporary impact of perturbations such as the animal's compromised health, antibiotic administration or a combination thereof, which temporarily altered the relative proportions of Clostridium clusters I and XIVa. In conclusion, this first long-term monitoring study of the faecal microbiota in feline strict carnivores not only reveals a remarkable compositional stability of this ecosystem, but also shows a qualitative and quantitative similarity in a defined set of faecal bacterial lineages across the five animals under study that may typify the core phylogenetic microbiome of cheetahs.

  16. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Full Text Available Abstract Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no task other than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able...

  17. Analysis of the treatment effect of auditory integrative training on 20 autistic children

    Institute of Scientific and Technical Information of China (English)

    张朝; 于宗富; 黄晓玲; 王玲; 方俊明

    2011-01-01

    Objective: To explore the treatment effect of auditory integrative training (AIT) on autistic children and provide clinical support for their rehabilitative treatment. Methods: 20 autistic children aged 2-4 years were treated with AIT using a digital auditory integration training device. The patients were assessed with the Autism Behavior Checklist (ABC), the Wechsler Preschool and Primary Scale of Intelligence (WPPSI), the Gesell Development Scale, and an AIT effect questionnaire; the effect was evaluated through changes in clinical manifestations and in ABC and IQ scores before and after treatment. Results: Six months after treatment, ABC scores had dropped significantly (P<0.05), while IQ scores had not changed significantly (P>0.05). The group improved considerably in many aspects, including language disorders, social interaction, emotional disorders, sleep, attention, and motor skills, but showed little change in eye contact, stereotyped behavior, and self-care. Conclusion: The short-term effect of auditory integrative training on the clinical symptoms of autistic children aged 2-4 years is positive, with a broad range of action.

  18. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si...

  19. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    Keywords: timbre, intensity, distance, room modeling, radio communication. Excerpt from Chapter 4, "Virtual Auditory Displays" (Virtual Environments Handbook), by Russell D. ...: for the musical note "A" played as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... The perceptual correlates of [frequency, amplitude,] and complexity are pitch, loudness, and timbre, respectively. This distinction between physical and perceptual measures of sound properties is an...
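
    The spacing of adjacent condensations in that example is one wavelength, lambda = c/f; a one-line check (c ≈ 343 m/s in air at 20 °C):

        c, f = 343.0, 440.0                 # speed of sound (m/s), note A4 (Hz)
        print("lambda = %.2f m" % (c / f))  # ~0.78 m between adjacent condensations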

  20. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

  1. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with those described in dual-stream models of language comprehension processing.

  2. Single case study on a child with autism treated by auditory integration training

    Institute of Scientific and Technical Information of China (English)

    张朝; 方俊明

    2012-01-01

    Objective: To explore the short-term therapeutic effect of auditory integration training (AIT) on a child with autism and provide a clinical basis for the rehabilitation therapy of children with autism. Methods: A single-case experimental study with an A-B-A design was conducted, in which one child aged three years and seven months was treated with AIT. Results: The child's instances of active language and active communication increased markedly, while the duration of crying and screaming and the frequency of stereotyped behavior decreased sharply; AIT also showed a delayed effect. Judging by the trend of the regression lines, the treatment period and the second baseline period showed large changes relative to the first baseline period, with a further improving trend. Conclusion: AIT had a short-term therapeutic effect for this autistic child.

  3. Auditory function in individuals within Leber's hereditary optic neuropathy pedigrees.

    Science.gov (United States)

    Rance, Gary; Kearns, Lisa S; Tan, Johanna; Gravina, Anthony; Rosenfeld, Lisa; Henley, Lauren; Carew, Peter; Graydon, Kelley; O'Hare, Fleur; Mackey, David A

    2012-03-01

    The aims of this study are to investigate whether auditory dysfunction is part of the spectrum of neurological abnormalities associated with Leber's hereditary optic neuropathy (LHON) and to determine the perceptual consequences of auditory neuropathy (AN) in affected listeners. Forty-eight subjects confirmed by genetic testing as having one of four mitochondrial mutations associated with LHON (mtDNA11778, mtDNA14484, mtDNA14482 and mtDNA3460) participated. Thirty-two of these had lost vision, and 16 were asymptomatic at the point of data collection. While the majority of individuals showed normal sound detection, >25% (of both symptomatic and asymptomatic participants) showed electrophysiological evidence of AN with either absent or severely delayed auditory brainstem potentials. Abnormalities were observed for each of the mutations, but subjects with the mtDNA11778 type were the most affected. Auditory perception was also abnormal in both symptomatic and asymptomatic subjects, with >20% of cases showing impaired detection of auditory temporal (timing) cues and >30% showing abnormal speech perception both in quiet and in the presence of background noise. The findings of this study indicate that a relatively high proportion of individuals with the LHON genetic profile may suffer functional hearing difficulties due to neural abnormality in the central auditory pathways.

  4. Early visual deprivation severely compromises the auditory sense of space in congenitally blind children.

    Science.gov (United States)

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-06-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in children, and that this capacity recovers over time.

  5. Temporal Integration Effects in Facial Expression Recognition under Different Temporal Duration Conditions

    Institute of Scientific and Technical Information of China (English)

    陈本友; 黄希庭

    2012-01-01

    By segmenting facial expressions into three parts and presenting the parts sequentially at different inter-stimulus intervals and presentation durations, this study examined participants' temporal integration of facial expressions, in order to explore the processing and influencing factors of temporal integration. Temporal integration is the process of perceptual processing in which successively presented, separate stimuli are combined into a meaningful representation. It is a complicated process influenced by multiple factors, such as temporal structure and stimulus components. Although this process has been explored with inter-stimulus intervals in face perception, little is known about the temporal integration effect in facial expression recognition; in particular, there has been no evidence that stimulus duration and stimulus category affect the temporal integration of facial expressions. In the present study, a part-whole judgment task was used to examine the influencing factors of temporal integration of facial expressions. In two experiments, each of three whole facial-expression pictures was segmented into three parts, each including a salient facial feature: eyes, nose, or mouth. These parts were presented sequentially at various intervals or presentation durations in a fixed order: eye part first, nose part next, mouth part last. The last part was followed by a mask, which eliminated effects of afterimages and other types of visual persistence. Participants then judged the category of the facial expression by pressing one of three number keys ("1", "2", "3"), each corresponding to one of the three expression categories. The results showed that (1) the temporal integration of facial expressions is affected by temporal structure and stimulus material; (2) whether separately presented facial-expression parts can be temporally integrated depends on the SOA; (3) there are category differences in the temporal integration of facial expressions; and (4) temporal integration of facial expressions operates within a limited visual buffer, with iconic memory and long-term memory closely involved in the process.

  6. Translation and adaptation of functional auditory performance indicators (FAPI)

    Directory of Open Access Journals (Sweden)

    Karina Ferreira

    2011-12-01

    Full Text Available Work with deaf children has gained new attention as the expectations and goals of therapy have expanded to language development and subsequent language learning. Many clinical tests were developed to evaluate speech sound perception in young children, in response to the need for accurate assessment of the hearing skills that develop from the use of individual hearing aids or cochlear implants. These tests also allow evaluation of the rehabilitation program. However, few of these tests are available in Portuguese. Evaluation with the Functional Auditory Performance Indicators (FAPI) generates a profile of a child's functional auditory skills, which lists auditory skills in an integrated and hierarchical order. It has seven hierarchical categories: sound awareness, meaningful sound, auditory feedback, sound source localization, auditory discrimination, short-term auditory memory, and linguistic auditory processing. FAPI evaluation allows the therapist to map the child's hearing performance profile, set targets for increasing hearing abilities, and develop an effective therapeutic plan. Objective: Since the FAPI is an American test, the inventory was adapted for application in the Brazilian population. Material and Methods: The translation followed the steps of translation and back-translation, and reproducibility was evaluated. Four translated versions (two originals and two back-translated) were compared, and revisions were made to ensure language adaptation and grammatical and idiomatic equivalence. Results: The inventory was duly translated and adapted. Conclusion: Further studies on the application of the translated FAPI are necessary to make the test practical for Brazilian clinical use.

  7. Cancer of the external auditory canal

    DEFF Research Database (Denmark)

    Nyrop, Mette; Grøntved, Aksel

    2002-01-01

    OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system used both on squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000....... PATIENTS: Ten women and 10 men with previously untreated primary cancer. Median age at diagnosis was 67 years (range, 31-87 years). Survival data included 18 patients with at least 2 years of follow-up or recurrence. INTERVENTION: Local canal resection or partial temporal bone resection. MAIN OUTCOME...

  8. PLASTICITY IN THE ADULT CENTRAL AUDITORY SYSTEM.

    Science.gov (United States)

    Irvine, Dexter R F; Fallon, James B; Kamke, Marc R

    2006-04-01

    The central auditory system retains into adulthood a remarkable capacity for plastic changes in the response characteristics of single neurons and the functional organization of groups of neurons. The most dramatic examples of this plasticity are provided by changes in frequency selectivity and organization as a consequence of either partial hearing loss or procedures that alter the significance of particular frequencies for the organism. Changes in temporal resolution are also seen as a consequence of altered experience. These forms of plasticity are likely to contribute to the improvements exhibited by cochlear implant users in the post-implantation period.

  10. Neural Response Properties of Primary, Rostral, and Rostrotemporal Core Fields in the Auditory Cortex of Marmoset Monkeys

    OpenAIRE

    Bendor, Daniel; Wang, Xiaoqin

    2008-01-01

    The core region of primate auditory cortex contains a primary and two primary-like fields (AI, primary auditory cortex; R, rostral field; RT, rostrotemporal field). Although it is reasonable to assume that multiple core fields provide an advantage for auditory processing over a single primary field, the differential roles these fields play and whether they form a functional pathway collectively such as for the processing of spectral or temporal information are unknown. In this report we compa...

  11. Observation of propagating femtosecond light pulse train generated by an integrated array illuminator as a spatially and temporally continuous motion picture.

    Science.gov (United States)

    Yamagiwa, Masatomo; Komatsu, Aya; Awatsuji, Yasuhiro; Kubota, Toshihiro

    2005-05-02

    We observed a propagating femtosecond light pulse train generated by an integrated array illuminator as a spatially and temporally continuous motion picture. To observe the light pulse train propagating in air, light-in-flight holography is applied. The integrated array illuminator is an optical device for generating an ultrashort light pulse train from a single ultrashort pulse. The experimentally obtained pulse width and pulse interval were 130 fs and 19.7 ps, respectively. A back-propagating femtosecond light pulse train, which is the -2 order diffracted light pulse from the array illuminator and which is difficult to observe using conventional methods, was observed.

  12. Areas of cat auditory cortex as defined by neurofilament proteins expressing SMI-32.

    Science.gov (United States)

    Mellott, Jeffrey G; Van der Gucht, Estel; Lee, Charles C; Carrasco, Andres; Winer, Jeffery A; Lomber, Stephen G

    2010-08-01

    The monoclonal antibody SMI-32 was used to characterize and distinguish individual areas of cat auditory cortex. SMI-32 labels non-phosphorylated epitopes on the high- and medium-molecular weight subunits of neurofilament proteins in cortical pyramidal cells and dendritic trees with the most robust immunoreactivity in layers III and V. Auditory areas with unique patterns of immunoreactivity included: primary auditory cortex (AI), second auditory cortex (AII), dorsal zone (DZ), posterior auditory field (PAF), ventral posterior auditory field (VPAF), ventral auditory field (VAF), temporal cortex (T), insular cortex (IN), anterior auditory field (AAF), and the auditory field of the anterior ectosylvian sulcus (fAES). Unique patterns of labeling intensity, soma shape, soma size, layers of immunoreactivity, laminar distribution of dendritic arbors, and labeled cell density were identified. Features that were consistent in all areas included: layers I and IV neurons are immunonegative; nearly all immunoreactive cells are pyramidal; and immunoreactive neurons are always present in layer V. To quantify the results, the numbers of labeled cells and dendrites, as well as cell diameter, were collected and used as tools for identifying and differentiating areas. Quantification of the labeling patterns also established profiles for ten auditory areas/layers and their degree of immunoreactivity. Areal borders delineated by SMI-32 were highly correlated with tonotopically-defined areal boundaries. Overall, SMI-32 immunoreactivity can delineate ten areas of cat auditory cortex and demarcate topographic borders. The ability to distinguish auditory areas with SMI-32 is valuable for the identification of auditory cerebral areas in electrophysiological, anatomical, and/or behavioral investigations.

  13. Glutamate-bound NMDARs arising from in vivo-like network activity extend spatio-temporal integration in a L5 cortical pyramidal cell model.

    Directory of Open Access Journals (Sweden)

    Matteo Farinella

    2014-04-01

    Full Text Available In vivo, cortical pyramidal cells are bombarded by asynchronous synaptic input arising from ongoing network activity. However, little is known about how such 'background' synaptic input interacts with nonlinear dendritic mechanisms. We have modified an existing model of a layer 5 (L5 pyramidal cell to explore how dendritic integration in the apical dendritic tuft could be altered by the levels of network activity observed in vivo. Here we show that asynchronous background excitatory input increases neuronal gain and extends both temporal and spatial integration of stimulus-evoked synaptic input onto the dendritic tuft. Addition of fast and slow inhibitory synaptic conductances, with properties similar to those from dendritic targeting interneurons, that provided a 'balanced' background configuration, partially counteracted these effects, suggesting that inhibition can tune spatio-temporal integration in the tuft. Excitatory background input lowered the threshold for NMDA receptor-mediated dendritic spikes, extended their duration and increased the probability of additional regenerative events occurring in neighbouring branches. These effects were also observed in a passive model where all the non-synaptic voltage-gated conductances were removed. Our results show that glutamate-bound NMDA receptors arising from ongoing network activity can provide a powerful spatially distributed nonlinear dendritic conductance. This may enable L5 pyramidal cells to change their integrative properties as a function of local network activity, potentially allowing both clustered and spatially distributed synaptic inputs to be integrated over extended timescales.
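
    The voltage-dependent nonlinearity that lets glutamate-bound NMDARs act as a distributed dendritic conductance is commonly modeled with the Jahr-Stevens Mg2+-block term. A minimal sketch follows, assuming textbook parameter values rather than those of the specific L5 model described above.

    ```python
    # Minimal sketch of the NMDA nonlinearity invoked above: current through
    # glutamate-bound NMDARs as a function of membrane voltage, using the
    # Jahr & Stevens (1990) Mg2+-block formalism. Parameters are common
    # textbook values, not necessarily those of this L5 model.
    import numpy as np

    def nmda_current(v_mv, g_max_ns=1.0, e_rev_mv=0.0, mg_mM=1.0):
        """NMDA current (nA) at membrane potential v_mv (mV)."""
        mg_block = 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mv))
        return g_max_ns * mg_block * (v_mv - e_rev_mv) * 1e-3  # nS*mV -> nA

    for v in (-70.0, -50.0, -30.0):
        print(f"V = {v:5.1f} mV -> I_NMDA = {nmda_current(v):+.4f} nA")
    # Depolarization relieves the Mg2+ block, so background excitation that
    # keeps dendrites depolarized lowers the threshold for NMDA spikes.
    ```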

  14. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in the posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  15. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia.

    Science.gov (United States)

    Kuga, Hironori; Onitsuka, Toshiaki; Hirano, Yoji; Nakamura, Itta; Oribe, Naoya; Mizuhara, Hiroaki; Kanai, Ryota; Kanba, Shigenobu; Ueno, Takefumi

    2016-10-01

    Recent MRI studies have shown that schizophrenia is characterized by reductions in brain gray matter, which progress in the acute state of the disease. Cortical circuitry abnormalities in gamma oscillations, such as deficits in the auditory steady state response (ASSR) to gamma frequency (>30-Hz) stimulation, have also been reported in schizophrenia patients. In the current study, we used BOLD signals to investigate neural responses during click stimulation. We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  16. Neural latencies do not explain the auditory and audio-visual flash-lag effect.

    Science.gov (United States)

    Arrighi, Roberto; Alais, David; Burr, David

    2005-11-01

    A brief flash presented physically aligned with a moving stimulus is perceived to lag behind, a well studied phenomenon termed the Flash-Lag Effect (FLE). It has been recently shown that the FLE also occurs in audition, as well as cross-modally between vision and audition. The present study has two goals: to investigate the acoustic and cross-modal FLE using a random motion technique; and to investigate whether neural latencies may account for the FLE in general. The random motion technique revealed a strong cross-modal FLE for visual motion stimuli and auditory probes, but not for the other conditions. Visual and auditory latencies for stimulus appearance and for motion were measured with three techniques: integration, temporal alignment and reaction times. All three techniques showed that a brief static acoustic stimulus is perceived more rapidly than a brief static visual stimulus, while a sound source in motion is perceived more slowly than a comparable visual stimulus. While the results of these three techniques agreed closely with each other, they were exactly the opposite of what would be required to account for the FLE by neural latencies. We conclude that neural latencies do not, in general, explain the flash-lag effect. Rather, our data suggest that neural integration times are more important.

  17. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  18. Thresholds of auditory-motor coupling measured with a simple task in musicians and non-musicians: was the sound simultaneous to the key press?

    Directory of Open Access Journals (Sweden)

    Floris T van Vugt

    Full Text Available The human brain is able to predict the sensory effects of its actions. But how precise are these predictions? The present research proposes a tool to measure thresholds between a simple action (keystroke) and a resulting sound. On each trial, participants were required to press a key. Upon each keystroke, a woodblock sound was presented. In some trials, the sound came immediately with the downward keystroke; at other times, it was delayed by a varying amount of time. Participants were asked to verbally report whether the sound came immediately or was delayed. Participants' delay detection thresholds (in msec) were measured with a staircase-like procedure. We hypothesised that musicians would have a lower threshold than non-musicians. Comparing pianists and brass players, we furthermore hypothesised that, as a result of a sharper attack of the timbre of their instrument, pianists might have lower thresholds than brass players. Our results show that non-musicians exhibited higher thresholds for delay detection (180 ± 104 ms) than the two groups of musicians (102 ± 65 ms), but there were no differences between pianists and brass players. The variance in delay detection thresholds could be explained by variance in sensorimotor synchronisation capacities as well as variance in a purely auditory temporal irregularity detection measure. This suggests that the brain's capacity to generate temporal predictions of sensory consequences can be decomposed into general temporal prediction capacities together with auditory-motor coupling. These findings indicate that the brain has a relatively large window of integration within which an action and its resulting effect are judged as simultaneous. Furthermore, musical expertise may narrow this window down, potentially due to a more refined temporal prediction. This novel paradigm provides a simple test to estimate the temporal precision of auditory-motor action-effect coupling, and the paradigm can readily be
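
    A minimal sketch of a staircase-like threshold procedure of the kind described (the study's exact step rules are not given here; a generic one-up/one-down rule with step halving and a simulated observer are assumed):

    ```python
    # Generic adaptive-staircase sketch for delay-detection thresholds.
    # The step rules and simulated observer are assumptions for
    # illustration; the paper's exact procedure may differ.
    import math
    import random

    def simulated_observer(delay_ms, threshold_ms=120.0):
        """Reports 'delayed' with probability rising around threshold_ms."""
        p = 1.0 / (1.0 + math.exp(-(delay_ms - threshold_ms) / 20.0))
        return random.random() < p

    delay, step = 300.0, 80.0
    reversals, last_delayed = [], None
    while len(reversals) < 8:
        delayed = simulated_observer(delay)
        if last_delayed is not None and delayed != last_delayed:
            reversals.append(delay)
            step = max(step / 2.0, 5.0)  # shrink step at each reversal
        # one-up/one-down: detected delays get shorter, missed ones longer
        delay = max(delay - step, 0.0) if delayed else delay + step
        last_delayed = delayed

    threshold = sum(reversals[-4:]) / 4.0  # mean of the last reversals
    print(f"estimated delay-detection threshold ~ {threshold:.0f} ms")
    ```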

  19. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described.

  20. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  1. How modality specific is processing of auditory and visual rhythms?

    Science.gov (United States)

    Pasinski, Amanda C; McAuley, J Devin; Snyder, Joel S

    2016-02-01

    The present study used ERPs to test the extent to which temporal processing is modality specific or modality general. Participants were presented with auditory and visual temporal patterns that began with a two- or three-event pattern delineating a constant standard time interval, followed by a two-event ending pattern delineating a variable test interval. Participants judged whether they perceived the pattern as a whole to be speeding up or slowing down. The contingent negative variation (CNV), a negative potential reflecting temporal expectancy, showed a larger amplitude for the auditory modality compared to the visual modality but a high degree of similarity in scalp voltage patterns across modalities, suggesting that the CNV arises from modality-general processes. A late, memory-dependent positive component (P3) also showed similar patterns across modalities.

  2. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)
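
    A sketch of how such a sparse-sampling block design can be modeled for analysis: the sound-on/off boxcar is convolved with a canonical HRF and sampled only at the acquisition times (TR = 8 s). The run length, block duration, and HRF shape below are generic assumptions, not the study's exact parameters.

    ```python
    # Sketch: GLM regressor for a sparse-sampling block design. Assumed
    # values: 8-min run, 60-s sound-off/on blocks (four alternations), and
    # an SPM-like double-gamma HRF; these are illustrative, not the study's.
    import numpy as np
    from scipy.stats import gamma

    def double_gamma_hrf(t):
        """Generic canonical HRF, t in seconds."""
        return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

    dt = 0.1
    t = np.arange(0, 480, dt)                    # one assumed 8-min run
    boxcar = ((t // 60).astype(int) % 2 == 1).astype(float)  # 60-s blocks
    hrf = double_gamma_hrf(np.arange(0, 32, dt))
    bold = np.convolve(boxcar, hrf)[: len(t)] * dt

    tr = 8.0                                     # one volume every 8 s
    sample_idx = (np.arange(0, 480, tr) / dt).astype(int)
    regressor = bold[sample_idx]                 # design-matrix column
    print(regressor.round(3))
    ```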

  3. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

    Full Text Available People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performances when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
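
    Coordination of this kind is typically quantified by the continuous relative phase between movement and stimulus. A sketch follows, using the Hilbert transform on synthetic stand-ins for the pendulum and stimulus time series (all signals and parameters are assumptions for illustration):

    ```python
    # Sketch: continuous relative phase between a movement signal and a
    # stimulus rhythm via the Hilbert transform, on synthetic signals.
    import numpy as np
    from scipy.signal import hilbert

    fs, f = 100.0, 1.0                       # sample rate (Hz), rhythm (Hz)
    t = np.arange(0, 30, 1 / fs)
    stimulus = np.sin(2 * np.pi * f * t)
    movement = np.sin(2 * np.pi * f * t - 0.4 + 0.1 * np.random.randn(len(t)))

    phase_stim = np.angle(hilbert(stimulus))
    phase_move = np.angle(hilbert(movement))
    rel_phase = np.angle(np.exp(1j * (phase_move - phase_stim)))  # wrapped

    # Negative mean relative phase = movement lags the stimulus; a high
    # resultant length R = stable coordination.
    mean_vec = np.mean(np.exp(1j * rel_phase))
    print(f"mean rel. phase = {np.degrees(np.angle(mean_vec)):.1f} deg, "
          f"stability R = {abs(mean_vec):.2f}")
    ```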

  4. Cholecystokinin from the entorhinal cortex enables neural plasticity in the auditory cortex.

    Science.gov (United States)

    Li, Xiao; Yu, Kai; Zhang, Zicong; Sun, Wenjian; Yang, Zhou; Feng, Jingyu; Chen, Xi; Liu, Chun-Hua; Wang, Haitao; Guo, Yi Ping; He, Jufang

    2014-03-01

    Patients with damage to the medial temporal lobe show deficits in forming new declarative memories but can still recall older memories, suggesting that the medial temporal lobe is necessary for encoding memories in the neocortex. Here, we found that cortical projection neurons in the perirhinal and entorhinal cortices were mostly immunopositive for cholecystokinin (CCK). Local infusion of CCK in the auditory cortex of anesthetized rats induced plastic changes that enabled cortical neurons to potentiate their responses or to start responding to an auditory stimulus that was paired with a tone that robustly triggered action potentials. CCK infusion also enabled auditory neurons to start responding to a light stimulus that was paired with a noise burst. In vivo intracellular recordings in the auditory cortex showed that synaptic strength was potentiated after two pairings of presynaptic and postsynaptic activity in the presence of CCK. Infusion of a CCKB antagonist in the auditory cortex prevented the formation of a visuo-auditory association in awake rats. Finally, activation of the entorhinal cortex potentiated neuronal responses in the auditory cortex, which was suppressed by infusion of a CCKB antagonist. Together, these findings suggest that the medial temporal lobe influences neocortical plasticity via CCK-positive cortical projection neurons in the entorhinal cortex.

  5. You can't stop the music: reduced auditory alpha power and coupling between auditory and memory regions facilitate the illusory perception of music during noise.

    Science.gov (United States)

    Müller, Nadia; Keil, Julian; Obleser, Jonas; Schulz, Hannah; Grunwald, Thomas; Bernays, René-Ludwig; Huppertz, Hans-Jürgen; Weisz, Nathan

    2013-10-01

    Our brain has the capacity of providing an experience of hearing even in the absence of auditory stimulation. This can be seen as illusory conscious perception. While increasing evidence postulates that conscious perception requires specific brain states that systematically relate to specific patterns of oscillatory activity, the relationship between auditory illusions and oscillatory activity remains mostly unexplained. To investigate this we recorded brain activity with magnetoencephalography and collected intracranial data from epilepsy patients while participants listened to familiar as well as unknown music that was partly replaced by sections of pink noise. We hypothesized that participants have a stronger experience of hearing music throughout noise when the noise sections are embedded in familiar compared to unfamiliar music. This was supported by the behavioral results showing that participants rated the perception of music during noise as stronger when noise was presented in a familiar context. Time-frequency data show that the illusory perception of music is associated with a decrease in auditory alpha power pointing to increased auditory cortex excitability. Furthermore, the right auditory cortex is concurrently synchronized with the medial temporal lobe, putatively mediating memory aspects associated with the music illusion. We thus assume that neuronal activity in the highly excitable auditory cortex is shaped through extensive communication between the auditory cortex and the medial temporal lobe, thereby generating the illusion of hearing music during noise.

  6. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory parameters, highlighting harmonious and balanced qualities while criticizing the noisy and cacophonous qualities of modern urban settings. This paper presents a reaffirmation of Schafer's central methodological claim: that environments can be analyzed through their sound, but offers considerations on the role ... musicalized through electro-acoustic equipment installed in shops, shopping streets, transit areas etc. Urban noise no longer acts only as disturbance, but also structures and shapes the places and spaces in which urban life unfolds. Based on research done in Japanese shopping streets and in Copenhagen the paper

  7. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal Modulation Transfer Functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional
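
    A toy version of the spectrotemporal analysis described above (a sketch, not the published model): the modulation content of a simulated auditory spectrogram is read off its 2D Fourier transform at a given temporal rate (Hz) and spectral scale (cycles/octave). The ripple stimulus and resolutions are illustrative assumptions.

    ```python
    # Sketch: measure the energy of a simulated spectrogram at a chosen
    # temporal rate and spectral scale via the 2D FFT. The moving-ripple
    # stimulus (4 Hz, 2 cyc/oct) and grid resolutions are assumptions.
    import numpy as np

    frame_rate = 100.0            # spectrogram frames per second
    chans_per_oct = 24.0          # frequency channels per octave
    n_t, n_f = 200, 96            # 2 s x 4 octaves

    tt = np.arange(n_t) / frame_rate
    ff = np.arange(n_f) / chans_per_oct
    T, F = np.meshgrid(tt, ff, indexing="ij")
    ripple = np.cos(2 * np.pi * (4.0 * T + 2.0 * F))  # 4 Hz, 2 cyc/oct

    spec2d = np.fft.fft2(ripple)
    rates = np.fft.fftfreq(n_t, d=1 / frame_rate)      # rates (Hz)
    scales = np.fft.fftfreq(n_f, d=1 / chans_per_oct)  # scales (cyc/oct)

    # Energy at the modulation the ripple contains:
    ri = np.argmin(abs(rates - 4.0))
    si = np.argmin(abs(scales - 2.0))
    print(f"energy at (4 Hz, 2 cyc/oct): {abs(spec2d[ri, si]):.1f}")
    ```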

  8. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
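
    Field-field coherence of the kind analyzed here can be sketched as follows; the two signals are synthetic local field potentials with a shared 8-Hz component standing in for the cortical and striatal recordings (the sampling rate and frequency band are assumptions):

    ```python
    # Sketch: magnitude-squared coherence between two synthetic LFPs.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000.0                              # assumed LFP sample rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    shared = np.sin(2 * np.pi * 8 * t)       # shared 8-Hz component
    lfp_cortex = shared + 0.8 * np.random.randn(len(t))
    lfp_striatum = 0.7 * shared + 0.8 * np.random.randn(len(t))

    f, cxy = coherence(lfp_cortex, lfp_striatum, fs=fs, nperseg=1024)
    band = (f >= 6) & (f <= 10)
    print(f"mean 6-10 Hz coherence: {cxy[band].mean():.2f}")
    ```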

  9. Integrating real-time and manual monitored data to predict hillslope soil moisture dynamics with high spatio-temporal resolution using linear and non-linear models

    Science.gov (United States)

    Zhu, Qing; Zhou, Zhiwen; Duncan, Emily W.; Lv, Ligang; Liao, Kaihua; Feng, Huihui

    2017-02-01

    Spatio-temporal variability of soil moisture (θ) is a challenge that remains to be better understood. A trade-off exists between spatial coverage and temporal resolution when using manual and real-time θ monitoring methods. This has restricted comprehensive, intensive examination of θ dynamics. In this study, we integrated the manually and real-time monitored data to depict hillslope θ dynamics with good spatial coverage and temporal resolution. Linear (stepwise multiple linear regression, SMLR) and non-linear (support vector machines, SVM) models were used to predict θ at 39 manual sites (sampled 1-2 times per month) from θ collected at three real-time monitoring sites (sampled every 5 min). By comparing the accuracies of SMLR and SVM for each depth at each manual site, an optimal prediction model was determined for that depth and site. Results showed that θ at the 39 manual sites could be reliably predicted (low root mean square errors) by the optimal model. The subsurface flow dynamics was an important factor that determined whether the relationship was linear or non-linear. Depth to bedrock, elevation, topographic wetness index, profile curvature, and θ temporal stability influenced the selection of prediction model, since they were related to the subsurface soil water distribution and movement. Using this approach, hillslope θ spatial distributions at unsampled times and dates can be predicted. Missing information on hillslope θ dynamics can thus be acquired successfully.
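
    A sketch of the model comparison described above, with synthetic data: a manual site's θ is predicted from the three real-time sites with a linear model and a non-linear SVM, and the better cross-validated model is retained. The stepwise selection step is reduced to plain least squares here for brevity.

    ```python
    # Sketch: linear vs. non-linear prediction of manual-site soil moisture
    # from three real-time sites. Data are synthetic; the paper's stepwise
    # regression is simplified to ordinary least squares.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform(0.10, 0.45, size=(60, 3))    # theta at 3 real-time sites
    y = 0.5 * X[:, 0] + 0.3 * np.tanh(8 * (X[:, 1] - 0.25)) \
        + 0.05 * rng.standard_normal(60)         # manual-site theta (synthetic)

    for name, model in [("linear", LinearRegression()),
                        ("SVM", SVR(kernel="rbf", C=1.0, epsilon=0.01))]:
        rmse = -cross_val_score(model, X, y, cv=5,
                                scoring="neg_root_mean_squared_error").mean()
        print(f"{name:6s} RMSE = {rmse:.3f}")
    # Per site and depth, the better-scoring model would be retained.
    ```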

  10. Tonotopic organization of human auditory association cortex.

    Science.gov (United States)

    Cansino, S; Williamson, S J; Karron, D

    1994-11-07

    Neuromagnetic studies of responses in human auditory association cortex for tone burst stimuli provide evidence for a tonotopic organization. The magnetic source image for the 100 ms component evoked by the onset of a tone is qualitatively similar to that of primary cortex, with responses lying deeper beneath the scalp for progressively higher tone frequencies. However, the tonotopic sequence of association cortex in three subjects is found largely within the superior temporal sulcus, although in the right hemisphere of one subject some sources may be closer to the inferior temporal sulcus. The locus of responses for individual subjects suggests a progression across the cortical surface that is approximately proportional to the logarithm of the tone frequency, as observed previously for primary cortex, with the span of 10 mm for each decade in frequency being comparable for the two areas.

  11. Neurophysiological mechanisms involved in auditory perceptual organization

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-09-01

    Full Text Available In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to independently follow the emission of each sound source over time. Although the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis have attracted interest only recently. Here, we review the studies investigating these mechanisms using electrophysiological recordings from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of the perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?

  12. Selective attention in an insect auditory neuron.

    Science.gov (United States)

    Pollack, G S

    1988-07-01

    Previous work (Pollack, 1986) showed that an identified auditory neuron of crickets, the omega neuron, selectively encodes the temporal structure of an ipsilateral sound stimulus when a contralateral stimulus is presented simultaneously, even though the contralateral stimulus is clearly encoded when it is presented alone. The present paper investigates the physiological basis for this selective response. The selectivity for the ipsilateral stimulus is a result of the apparent intensity difference of ipsi- and contralateral stimuli, which is imposed by auditory directionality; when simultaneous presentation of stimuli from the 2 sides is mimicked by presenting low- and high-intensity stimuli simultaneously from the ipsilateral side, the neuron responds selectively to the high-intensity stimulus, even though the low-intensity stimulus is effective when it is presented alone. The selective encoding of the more intense (= ipsilateral) stimulus is due to intensity-dependent inhibition, which is superimposed on the cell's excitatory response to sound. Because of the inhibition, the stimulus with lower intensity (i.e., the contralateral stimulus) is rendered subthreshold, while the stimulus with higher intensity (the ipsilateral stimulus) remains above threshold. Consequently, the temporal structure of the low-intensity stimulus is filtered out of the neuron's spike train. The source of the inhibition is not known. It is not a consequence of activation of the omega neuron. Its characteristics are not consistent with those of known inhibitory inputs to the omega neuron.

  13. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

    Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte;

    2013-01-01

    Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG

  14. Rhythm implicitly affects temporal orienting of attention across modalities.

    Science.gov (United States)

    Bolger, Deirdre; Trost, Wiebke; Schön, Daniele

    2013-02-01

    Here we present two experiments investigating the implicit orienting of attention over time by entrainment to an auditory rhythmic stimulus. In the first experiment, participants carried out detection and discrimination tasks with auditory and visual targets while listening to an isochronous auditory sequence, which acted as the entraining stimulus. For the second experiment, we used musical extracts as the entraining stimulus and tested the resulting strength of entrainment with a visual discrimination task. Both experiments used reaction times as the dependent variable. By manipulating the appearance of targets across four selected metrical positions of the auditory entraining stimulus, we were able to observe how entraining to a rhythm modulates behavioural responses. That our results were independent of modality gives a new insight into cross-modal interactions between auditory and visual modalities in the context of dynamic attending to auditory temporal structure.

  15. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    Full Text Available The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of a spatially accurate auditory and visual rendering for VR setups.

  16. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan...

  17. Electrophysiological correlates of individual differences in perception of audiovisual temporal asynchrony.

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2016-06-01

    Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability, 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency in the neural encoding of sound onsets.
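
    A sketch of the reported measure (synthetic placeholder data; the study's electrode selection and exact window handling are not reproduced): the mean ERP voltage in the 210-270 ms window after sound onset is extracted per participant and regressed against simultaneity-judgment accuracy.

    ```python
    # Sketch: mean ERP amplitude in a post-sound-onset window, regressed
    # against behavioral accuracy across participants. Arrays are synthetic.
    import numpy as np
    from scipy.stats import linregress

    fs, t0 = 500, -0.2                 # sample rate (Hz), epoch start (s)
    n_subj, n_samp = 20, 500
    erps = np.random.randn(n_subj, n_samp) * 2.0    # subject-average ERPs (uV)
    accuracy = np.random.uniform(0.4, 0.95, n_subj) # proportion correct

    times = t0 + np.arange(n_samp) / fs
    win = (times >= 0.210) & (times <= 0.270)
    mean_volt = erps[:, win].mean(axis=1)           # one value per participant

    fit = linregress(mean_volt, accuracy)
    print(f"R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")
    ```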

  18. Effect of auditory integration training on neuropsychological development in children with expressive language disorder

    Institute of Scientific and Technical Information of China (English)

    钱沁芳; 欧萍; 王章琼; 余秋娟; 谢燕钦; 杨式薇; 黄艳; 卢国斌; 杨闽燕

    2016-01-01

    Objective: To explore the short-term efficacy of auditory integration training (AIT) in children with expressive language disorder. Methods: 168 children diagnosed with expressive language disorder were randomly divided into four groups of 42 cases each: treatment group 1 (Group A), treatment group 2 (Group B), a control group (Group C), and a blank control group (Group D). Auditory steady-state response (ASSR) testing with a brainstem evoked potential system was used to determine each child's sensitive hearing frequencies, and AIT was delivered with a digital auditory integration training device. The Neuropsychological Development Scale for Children Aged 0-6 Years and an AIT efficacy questionnaire were used; overall efficacy was assessed from the differences in adaptability, language, and social-behavior developmental quotient (DQ) scores 3 months before and after treatment, and from changes in Group A's clinical symptoms 1 month and 3 months after training. Results: After 3 months of treatment, the language and social-behavior DQs of Groups A and B were significantly higher than before treatment (t = -12.104 to -2.790, all P < 0.01), whereas the DQs of the three domains in Group C showed no significant improvement (t = -1.655 to 1.193, all P > 0.05), and the DQs of the three domains in Group D were significantly lower than before treatment (t = 2.509 to 3.371, all P < 0.05). Three months after treatment, the language and social-behavior DQs differed significantly among the four groups (F = 16.192 to 35.544, all P < 0.01). Group B had the highest language and social-behavior DQ scores (82.90 ± 10.39 and 86.51 ± 7.47, respectively), followed by Group A (73.75 ± 15.45 and 83.91 ± 9.20). The adaptability DQs of Groups A and B differed significantly from the blank control group (P < 0.05), and the language and social-behavior DQs of Group A differed significantly from the control group (P < 0.05). In Group A, the mean overall response rates for language, social interaction, and emotion were 78.57%, 65.24%, and 19.05% after 1 month of training, and 86.31%, 73.81%, and 43.65% after 3 months. Conclusion: AIT can improve the developmental level of language and social behavior in children with expressive language disorder and has some efficacy in improving their language expression, social interaction, and emotional state.

  19. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  20. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing ... aimed at experimentally characterizing the effects of cochlear damage on listeners' auditory processing, in terms of sensitivity loss and reduced temporal and spectral resolution. The results showed that listeners with comparable audiograms can have very different estimated cochlear input

  1. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
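
    The peak-picking step can be illustrated as follows (a sketch under assumed STFT parameters, not the authors' exact algorithm): only the k largest time-frequency magnitudes per second are kept, and the rest are zeroed before resynthesis.

    ```python
    # Sketch: keep only the k largest time-frequency bins per second of
    # signal and resynthesize. STFT parameters and k are assumptions.
    import numpy as np
    from scipy.signal import stft, istft

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(len(t))

    freqs, frames, Z = stft(x, fs=fs, nperseg=512)
    k = 10                                   # energy peaks per second
    duration = len(x) / fs
    n_keep = max(int(k * duration), 1)
    flat = np.abs(Z).ravel()
    thresh = np.sort(flat)[-n_keep]          # magnitude of the k-th largest bin
    Z_sketch = np.where(np.abs(Z) >= thresh, Z, 0)

    _, x_sketch = istft(Z_sketch, fs=fs, nperseg=512)
    print(f"kept {int((np.abs(Z_sketch) > 0).sum())} of {Z.size} bins, "
          f"resynthesized {x_sketch.size} samples")
    ```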

  3. CT findings of the osteoma of the external auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ha Young; Song, Chang Joon; Yoon, Chung Dae; Park, Mi Hyun; Shin, Byung Seok [Chungnam National University, School of Medicine, Daejeon (Korea, Republic of)

    2006-07-15

    We report the CT imaging findings of osteoma of the external auditory canal. Temporal bone CT scanning was performed on eight patients (4 males and 4 females, aged between 8 and 41 years) with pathologically proven osteoma of the external auditory canal after surgery, and the CT findings were retrospectively reviewed. We analyzed the size, shape, distribution and location of the osteomas, as well as the relationship between the lesion and the tympanosquamous or tympanomastoid suture line, and the changes seen on follow-up CT images in the patients for whom follow-up was available. All osteomas of the external auditory canal were unilateral, solitary, pedunculated bony masses. In five patients the osteoma occurred on the left side, and in the other three patients on the right side. The average size of the osteomas was 0.6 cm, with the smallest being 0.5 cm and the largest 1.2 cm. Each lesion was located at the osteochondral junction in the terminal part of the osseous external ear canal. The stalk of the osteoma arose from the anteroinferior wall in five cases (63%), from the anterosuperior wall (the tympanosquamous suture line) in two cases (25%), and from the anterior wall in one case. The osteoma was of compact form in five cases and of cancellous form in three cases. One case of the cancellous form changed into a compact form 35 months later due to advanced ossification. Osteomas of the external auditory canal developed unilaterally and solitarily. The characteristic imaging finding is a pedunculated bony mass attached to the external auditory canal by its stalk. Contrary to common knowledge about its site of origin, the osteomas mostly arose from the tympanic wall, regardless of the tympanosquamous or tympanomastoid suture lines.

  4. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  5. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  6. Merging functional and structural properties of the monkey auditory cortex

    Directory of Open Access Journals (Sweden)

    Olivier eJoly

    2014-07-01

    Full Text Available Recent neuroimaging studies in primates aim to define the functional properties of auditory cortical areas, especially areas beyond A1, in order to further our understanding of the auditory cortical organization. Precise mapping of functional magnetic resonance imaging (fMRI) results and interpretation of their localizations among all the small auditory subfields remains challenging. To facilitate this mapping, we combined here information from cortical folding, micro-anatomy, a surface-based atlas and tonotopic mapping. We used, for the first time, a phase-encoded fMRI design for mapping the monkey tonotopic organization. From posterior to anterior, we found a high-low-high progression of frequency preference on the superior temporal plane. We show a faithful representation of the fMRI results on a locally flattened surface of the superior temporal plane. In a tentative scheme to delineate core versus belt regions, which share similar tonotopic organizations, we used the ratio of T1-weighted and T2-weighted MR images as a measure of cortical myelination. Our results, presented along a co-registered surface-based atlas, can be interpreted in terms of a current model of the monkey auditory cortex.
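
    In a phase-encoded design like the one above, the stimulus sweeps cyclically through frequency, and each voxel's preferred frequency is read off the phase of its response at the sweep frequency. A minimal sketch of that analysis, assuming one voxel's time series and a known number of sweep cycles (names and numbers are illustrative):

        import numpy as np

        def preferred_phase(ts, n_cycles):
            """Response phase at the stimulus sweep frequency: bin n_cycles
            of the FFT for a series covering n_cycles complete sweeps.
            The phase maps onto position in the sweep (e.g., tone frequency)."""
            spectrum = np.fft.rfft(ts - ts.mean())
            return np.angle(spectrum[n_cycles]) % (2 * np.pi)

        # A noisy voxel responding a quarter of the way through each sweep
        n_tr, n_cycles = 240, 8
        t = np.arange(n_tr)
        ts = np.cos(2 * np.pi * n_cycles * t / n_tr - np.pi / 2)
        ts += 0.5 * np.random.randn(n_tr)
        print(preferred_phase(ts, n_cycles))  # ~3*pi/2, i.e. a -pi/2 lag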

  7. Brain Bases for Auditory Stimulus-Driven Figure-Ground Segregation

    OpenAIRE

    Teki, S.; Chait, M.; Kumar, S.; von Kriegstein, K.; Griffiths, T.D.

    2011-01-01

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the ri...
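
    The logic of such a stochastic figure-ground stimulus can be illustrated with a toy generator: a background of random tone chords plus a "figure" whose frequency components repeat coherently across successive chords. This is a hedged sketch of the stimulus idea only; the chord duration, pool size and component counts are arbitrary, not the published parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 16000
        chord_dur = 0.05  # 50 ms chords (arbitrary)
        freq_pool = np.logspace(np.log10(200), np.log10(7000), 120)

        def chord(freqs):
            t = np.arange(int(fs * chord_dur)) / fs
            return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

        # Figure: 4 frequencies repeated coherently across 10 chords;
        # background: fresh random frequencies in every chord.
        figure = rng.choice(freq_pool, size=4, replace=False)
        chords = [chord(np.concatenate([figure,
                                        rng.choice(freq_pool, 8, replace=False)]))
                  for _ in range(10)]
        stimulus = np.concatenate(chords)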

  9. Effects of auditory rhythm and music on gait disturbances in Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Aidin eAshoori

    2015-11-01

    Full Text Available Gait abnormalities such as shuffling steps, start hesitation, and freezing are common and often incapacitating symptoms of Parkinson's disease (PD) and other parkinsonian disorders. Pharmacological and surgical approaches have only limited efficacy in treating these gait disorders. Rhythmic auditory stimulation (RAS), such as playing marching music or dance therapy, has been shown to be a safe, inexpensive, and effective method for improving gait in PD patients. However, RAS that adapts to patients' movements may be more effective than the rigid, fixed-tempo RAS used in most studies. In addition to auditory cueing, immersive virtual reality technologies that utilize interactive computer-generated systems through wearable devices are increasingly used for improving brain-body interaction and sensory-motor integration. Using multisensory cues, these therapies may be particularly suitable for the treatment of parkinsonian freezing and other gait disorders. In this review, we examine the affected neurological circuits underlying gait and temporal processing in PD patients and summarize the current studies demonstrating the effects of RAS on improving these gait deficits.
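
    The contrast drawn above between fixed-tempo and adaptive RAS can be made concrete: an adaptive cueing loop nudges the cue interval toward the patient's measured step cadence instead of holding it constant. A minimal sketch under assumed inputs (step intervals from some gait sensor; nothing here comes from the cited studies):

        def adaptive_ras(step_intervals, start_interval=1.0, gain=0.3):
            """Adapt the auditory cue interval toward the walker's cadence.
            gain=0 reproduces rigid fixed-tempo RAS; gain=1 copies each step."""
            cue, cues = start_interval, []
            for step in step_intervals:
                cue += gain * (step - cue)  # first-order tracking of cadence
                cues.append(round(cue, 3))
            return cues

        # Hypothetical step intervals in seconds
        print(adaptive_ras([1.2, 1.1, 1.15, 0.9]))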

  10. Interaction of speech and script in human auditory cortex: insights from neuro-imaging and effective connectivity.

    Science.gov (United States)

    van Atteveldt, Nienke; Roebroeck, Alard; Goebel, Rainer

    2009-12-01

    In addition to visual information from the face of the speaker, a less natural, but nowadays extremely important, visual component of speech is its representation in script. In this review, neuro-imaging studies are examined that aimed to understand how speech and script are associated in the adult "literate" brain. The reviewed studies focused on the role of different stimulus and task factors and on effective connectivity between different brain regions. The studies are summarized into a neural mechanism for the integration of speech and script that can serve as a basis for future studies addressing (the failure of) literacy acquisition. In this proposed mechanism, speech sound processing in auditory cortex is modulated by co-presented visual letters, depending on the congruency of the letter-sound pairs. Other factors of influence are temporal correspondence, input quality and task instruction. We present results showing that the modulation of auditory cortex is most likely mediated by feedback from heteromodal areas in the superior temporal cortex, but direct influences from visual cortex are not excluded. The influence of script on speech sound processing occurs automatically and shows extended development during reading acquisition. This review concludes with suggestions to answer currently still open questions, to get closer to understanding the neural basis of normal and impaired literacy.

  11. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and enhance BCI performance, we designed a visual-auditory speller. We investigate the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.
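
    Accuracy and speed figures like those above are often condensed into a single information transfer rate (ITR). The sketch below applies the standard Wolpaw ITR formula to the reported Parallel-Speller numbers (87.7% accuracy, 1.65 symbols/min); the 30-symbol alphabet is an assumption for illustration, not a figure from the study.

        import math

        def itr_bits_per_min(n_classes, accuracy, selections_per_min):
            """Wolpaw information transfer rate for a BCI speller."""
            p, n = accuracy, n_classes
            bits = (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))
            return bits * selections_per_min

        print(itr_bits_per_min(30, 0.877, 1.65))  # ~6.2 bits/min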

  12. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    Full Text Available OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied to 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification by the ear opposite the lesion in the free recall stage was diminished and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  13. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  14. Processing of location and pattern changes of natural sounds in the human auditory cortex.

    Science.gov (United States)

    Altmann, Christian F; Bledowski, Christoph; Wibral, Michael; Kaiser, Jochen

    2007-04-15

    Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

  15. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians, due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between the two modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, due to their extensive auditory–motor practice routine during musical training, have superior performance and learning capabilities when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory-motor integration in musicians at the behavioral level.

  16. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.

    Science.gov (United States)

    Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal

    2016-01-01

    Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
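
    The GAM machinery described above, smooth terms per stimulus dimension plus an interaction term, can be sketched with the pygam library (an assumed choice; the authors' implementation is not specified in the abstract):

        import numpy as np
        from pygam import LinearGAM, s, te

        rng = np.random.default_rng(1)

        # Toy data: a "firing rate" depending on two stimulus dimensions
        # and on their interaction
        X = rng.uniform(0, 1, size=(500, 2))
        y = (np.sin(3 * X[:, 0]) + X[:, 1] ** 2
             + 2 * X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(500))

        # s(i): smooth marginal effect of dimension i;
        # te(0, 1): tensor-product interaction across both dimensions
        gam = LinearGAM(s(0) + s(1) + te(0, 1)).fit(X, y)
        gam.summary()  # significance of te(0, 1) flags cross-dimension interaction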

  18. Brief Report: Which Came First? Exploring Crossmodal Temporal Order Judgements and Their Relationship with Sensory Reactivity in Autism and Neurotypicals

    Science.gov (United States)

    Poole, Daniel; Gowen, Emma; Warren, Paul A.; Poliakoff, Ellen

    2017-01-01

    Previous studies have indicated that visual-auditory temporal acuity is reduced in children with autism spectrum conditions (ASC) in comparison to neurotypicals. In the present study we investigated temporal acuity for all possible bimodal pairings of visual, tactile and auditory information in adults with ASC (n = 18) and a matched control group…

  19. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
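
    Two of the metric families contrasted in this study, amplitude-based and phase-based coupling, can be sketched for a single sensor pair: band-pass to the beta band, take the Hilbert analytic signal, then either correlate the envelopes or compute a phase-locking value. The 15-30 Hz band edges below are illustrative assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def beta_coupling(x, y, fs, band=(15.0, 30.0)):
            """Amplitude-envelope correlation and phase-locking value (PLV)
            between two signals in an assumed beta band."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                          btype="bandpass")
            ax, ay = hilbert(filtfilt(b, a, x)), hilbert(filtfilt(b, a, y))
            env_corr = np.corrcoef(np.abs(ax), np.abs(ay))[0, 1]  # amplitude-based
            plv = np.abs(np.mean(np.exp(1j * (np.angle(ax) - np.angle(ay)))))
            return env_corr, plv

        fs = 600.0
        t = np.arange(int(10 * fs)) / fs
        common = np.sin(2 * np.pi * 20 * t)  # shared 20 Hz (beta) component
        x = common + 0.5 * np.random.randn(t.size)
        y = common + 0.5 * np.random.randn(t.size)
        print(beta_coupling(x, y, fs))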

  20. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the two elderly groups (MP and EP). The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.
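
    The hemispheric asymmetry reported above is commonly quantified with a lateralization index, LI = (L - R) / (L + R), computed over each hemisphere's activation. A minimal sketch, assuming summed suprathreshold activation per hemisphere as input:

        def lateralization_index(left, right):
            """LI in [-1, 1]: positive = leftward, negative = rightward."""
            total = left + right
            return (left - right) / total if total else 0.0

        print(lateralization_index(820.0, 540.0))  # > 0: leftward (young-like)
        print(lateralization_index(390.0, 610.0))  # < 0: rightward (elderly-like)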

  1. Temporal Multisensory Processing and its Relationship to Autistic Functioning

    Directory of Open Access Journals (Sweden)

    Leslie D Kwakye

    2011-10-01

    Full Text Available Autism spectrum disorders (ASD) form a continuum of neurodevelopmental disorders characterized by deficits in communication and reciprocal social interaction, repetitive behaviors, and restricted interests. Sensory disturbances are also frequently reported in clinical and autobiographical accounts. However, few empirical studies have characterized the fundamental features of sensory and multisensory processing in ASD. Recently published studies have shown that children with ASD are able to integrate low-level multisensory stimuli, but do so over an enlarged temporal window when compared with typically developing (TD) children. The current study sought to expand upon these previous findings by examining differences in the temporal processing of low-level multisensory stimuli in high-functioning (HFA) and low-functioning (LFA) children with ASD in the context of a simple reaction time task. Contrary to these previous findings, children with both HFA and LFA showed smaller gains in performance under multisensory (i.e., combined visual-auditory) conditions when compared with their TD peers. Additionally, the pattern of performance gains as a function of stimulus onset asynchrony (SOA) was similar across groups, suggesting similarities in the temporal processing of these cues that run counter to previous studies that have shown an enlarged "temporal window." These findings add complexity to our understanding of the multisensory processing of low-level stimuli in ASD and may hold promise for the development of more sensitive diagnostic measures and improved remediation strategies in autism.

  2. The critical role of Golgi cells in regulating spatio-temporal integration and plasticity at the cerebellum input stage

    Directory of Open Access Journals (Sweden)

    2008-07-01

    Full Text Available After its discovery at the end of the 19th century (Golgi, 1883), the Golgi cell was precisely described by S.R. y Cajal (see Cajal, 1987, 1995) and functionally identified as an inhibitory interneuron 50 years later by J.C. Eccles and colleagues (Eccles et al., 1967). Its role was then cast by Marr (1969), within the Motor Learning Theory, as a codon size regulator of granule cell activity. It was immediately clear that Golgi cells had to play a critical role, since they are the main inhibitory interneurons of the granular layer and control the activity of as many as 100 million granule cells. In vitro, Golgi cells show pacemaking, resonance, phase-reset and rebound excitation in the theta-frequency band. These properties are likely to impact on their activity in vivo, which shows irregular spontaneous beating modulated by sensory inputs and burst responses to punctate stimulation followed by a silent pause. Moreover, investigations have given insight into Golgi cell connectivity within the cerebellar network and into its impact on the spatio-temporal organization of activity. It turns out that Golgi cells can control both the temporal dynamics and the spatial distribution of information transmitted through the cerebellar network. Moreover, Golgi cells regulate the induction of long-term synaptic plasticity at the mossy fiber-granule cell synapse. Thus, the concept is emerging that Golgi cells are of critical importance for regulating granular layer network activity, bearing important consequences for cerebellar computation as a whole.

  3. Investigation of spatial resolution and temporal performance of SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout) with integrated electrostatic focusing

    Science.gov (United States)

    Scaduto, David A.; Lubinsky, Anthony R.; Rowlands, John A.; Kenmotsu, Hidenori; Nishimoto, Norihito; Nishino, Takeshi; Tanioka, Kenkichi; Zhao, Wei

    2014-03-01

    We have previously proposed SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout), a novel detector concept with potentially superior spatial resolution and low-dose performance compared with existing flat-panel imagers. The detector comprises a scintillator that is optically coupled to an amorphous selenium photoconductor operated with avalanche gain, known as high-gain avalanche rushing photoconductor (HARP). High resolution electron beam readout is achieved using a field emitter array (FEA). This combination of avalanche gain, allowing for very low-dose imaging, and electron emitter readout, providing high spatial resolution, offers potentially superior image quality compared with existing flat-panel imagers, with specific applications to fluoroscopy and breast imaging. Through the present collaboration, a prototype HARP sensor with integrated electrostatic focusing and nano-Spindt FEA readout technology has been fabricated. The integrated electron-optic focusing approach is more suitable for fabricating large-area detectors. We investigate the dependence of spatial resolution on sensor structure and operating conditions, and compare the performance of electrostatic focusing with previous technologies. Our results show a clear dependence of spatial resolution on the electrostatic focusing potential, with performance approaching that of the previous design with an external mesh electrode. Further, the temporal performance (lag) of the detector was evaluated, and the results show that the integrated electrostatic focusing design exhibits comparable or better performance than the mesh-electrode design. This study represents the first technical evaluation and characterization of the SAPHIRE concept with integrated electrostatic focusing.

  4. Spectral and temporal properties of the supergiant fast X-ray transient IGR J18483-0311 observed by INTEGRAL

    CERN Document Server

    Ducci, L; Sasaki, M; Santangelo, A; Esposito, P; Romano, P; Vercellone, S

    2013-01-01

    IGR J18483-0311 is a supergiant fast X-ray transient whose compact object is located in a wide (18.5 d) and eccentric (e~0.4) orbit, which shows sporadic outbursts that reach X-ray luminosities of ~1e36 erg/s. We investigated the timing properties of IGR J18483-0311 and studied the spectra during bright outbursts by fitting physical models based on thermal and bulk Comptonization processes for accreting compact objects. We analysed archival INTEGRAL data collected in the period 2003-2010, focusing on the observations with IGR J18483-0311 in outburst. We searched for pulsations in the INTEGRAL light curves of each outburst. We took advantage of the broadband observing capability of INTEGRAL for the spectral analysis. We observed 15 outbursts, seven of which we report here for the first time. This data analysis almost doubles the statistics of flares of this binary system detected by INTEGRAL. A refined timing analysis did not reveal a significant periodicity in the INTEGRAL observation where a ~21s pulsation w...
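
    A period search of the kind described, hunting a ~21 s pulsation in unevenly sampled light curves, is commonly done with a Lomb-Scargle periodogram; a sketch with astropy on synthetic data follows (the authors' actual method is not specified in this abstract).

        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(3)

        # Toy unevenly sampled light curve with a weak 21 s pulsation
        t = np.sort(rng.uniform(0.0, 2000.0, 1500))  # seconds
        rate = (10.0 + 0.8 * np.sin(2 * np.pi * t / 21.0)
                + rng.normal(0.0, 1.0, t.size))

        freq, power = LombScargle(t, rate).autopower(minimum_frequency=1 / 100.0,
                                                     maximum_frequency=1 / 2.0)
        print(1.0 / freq[np.argmax(power)])  # ~21 s for a detectable signal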

  5. Perceptual grouping over time within and across auditory and tactile modalities.

    Directory of Open Access Journals (Sweden)

    I-Fan Lin

    Full Text Available In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

  6. Tactile stimulation and hemispheric asymmetries modulate auditory perception and neural responses in primary auditory cortex.

    Science.gov (United States)

    Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T

    2013-10-01

    Although multisensory integration has been an important area of recent research, most studies focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior as effectively which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli, if presented on the left side. Elevated fMRI-signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD-responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity.

  7. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory, with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  8. Spatial attention alleviates temporal crowding, but neither temporal nor spatial uncertainty are necessary for the emergence of temporal crowding.

    Science.gov (United States)

    Tkacz-Domb, Shira; Yeshurun, Yaffa

    2017-03-01

    Recently, we demonstrated temporal crowding with normal observers: Target identification was impaired when it was surrounded by other stimuli in time, even when the interstimuli intervals (ISIs) were relatively long. Here, we examined whether temporal and spatial uncertainties play a critical role in the emergence of temporal crowding. We presented a sequence of three letters to the same peripheral location, right or left of fixation, separated by varying ISI (106-459 ms). One of these letters was the target, and the observers indicated its orientation. To eliminate temporal uncertainty, the position of the target within the sequence was fixed for an entire block (Experiment 1). To eliminate spatial uncertainty, we employed spatial attentional precues that indicated the letters' location. The precue was either auditory (Experiment 2) or visual (Experiment 3). We found temporal crowding to result in worse performance with shorter ISIs, even when there was no temporal or spatial uncertainty. Unlike the auditory cue, the visual cue affected performance. Specifically, when there was uncertainty regarding the target location (i.e., when the target appeared in the first display), precueing the target location improved overall performance and reduced the ISI effect, although it was not completely eliminated. These results suggest that temporal and spatial uncertainties are not necessary for the emergence of temporal crowding and that spatial attention can reduce temporal crowding.

  9. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  10. Auditory evoked potentials in postconcussive syndrome.

    Science.gov (United States)

    Drake, M E; Weate, S J; Newell, S A

    1996-12-01

    The neuropsychiatric sequelae of minor head trauma have been a source of controversy. Most clinical and imaging studies have shown no alteration after concussion, but neuropsychological and neuropathological abnormalities have been reported. Some changes in neurophysiologic diagnostic tests have been described in postconcussive syndrome. We recorded middle latency auditory evoked potentials (MLR) and slow vertex responses (SVR) in 20 individuals with prolonged cognitive difficulties, behavior changes, dizziness, and headache after concussion. The MLR used alternating-polarity clicks presented monaurally at 70 dB SL at 4 per second, with 40 dB contralateral masking. Five hundred responses were recorded and replicated from Cz-A1 and Cz-A2, with a 50 ms analysis time and a 20-1000 Hz filter band pass. SVRs were recorded with the same montage, but used rarefaction clicks, a 0.5 Hz stimulus rate, a 500 ms analysis time, and a 1-50 Hz filter band pass. Na and Pa MLR components were reduced in amplitude in postconcussion patients. Pa latency was significantly longer in patients than in controls. SVR amplitudes were larger in concussed individuals, but differences in latency and amplitude were not significant. These changes may reflect posttraumatic disturbance in presumed subcortical MLR generators, or in frontal or temporal cortical structures that modulate them. Middle- and long-latency auditory evoked potentials may be helpful in the evaluation of postconcussive neuropsychiatric symptoms.

  11. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal, but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAE, but exhibit an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify a discrepancy between an absent ABR and present hearing thresholds. Speech understanding difficulties are worse than can be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and a hearing loss that can be permanent, get worse, or get better.

  12. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial

  13. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    Science.gov (United States)

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  15. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

    Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  16. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  17. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
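
    The discrimination measure used above, d', is the difference between the z-transformed hit and false-alarm rates from the old/new judgments. A minimal sketch using SciPy's inverse normal CDF, with a standard correction to keep rates off the 0/1 boundaries:

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """d' = z(hit rate) - z(false-alarm rate)."""
            hit_rate = (hits + 0.5) / (hits + misses + 1)  # log-linear correction
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical counts for one condition
        print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))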

  18. Phonetic feature encoding in human superior temporal gyrus.

    Science.gov (United States)

    Mesgarani, Nima; Cheung, Connie; Johnson, Keith; Chang, Edward F

    2014-02-28

    During speech perception, linguistic elements such as consonants and vowels are extracted from a complex acoustic speech signal. The superior temporal gyrus (STG) participates in high-order auditory processing of speech, but how it encodes phonetic information is poorly understood. We used high-density direct cortical surface recordings in humans while they listened to natural, continuous speech to reveal the STG representation of the entire English phonetic inventory. At single electrodes, we found response selectivity to distinct phonetic features. Encoding of acoustic properties was mediated by a distributed population response. Phonetic features could be directly related to tuning for spectrotemporal acoustic cues, some of which were encoded in a nonlinear fashion or by integration of multiple cues. These findings demonstrate the acoustic-phonetic representation of speech in human STG.

  20. Integrated, multi-scale, spatial-temporal cell biology--A next step in the post genomic era.

    Science.gov (United States)

    Horwitz, Rick

    2016-03-01

    New microscopic approaches, high-throughput imaging, and gene editing promise major new insights into cellular behaviors. When coupled with genomic and other 'omic information and "mined" for correlations and associations, a new breed of powerful and useful cellular models should emerge. These top-down, coarse-grained, statistical models, in turn, can be used to form hypotheses that merge with the fine-grained, bottom-up mechanistic studies and models that are the backbone of cell biology. The goal of the Allen Institute for Cell Science is to develop the top-down approach by building a high-throughput microscopy pipeline that is integrated with modeling, using gene-edited hiPS cell lines in various physiological and pathological contexts. The output of these experiments and models will be an "animated" cell, capable of integrating and analyzing image data generated from experiments and models.

  1. Spectral and temporal properties of long GRBs detected by INTEGRAL from 3 keV to 8 MeV

    DEFF Research Database (Denmark)

    Martin-Carrillo, A.; Topinka, M.; Hanlon, L.;

    2010-01-01

    Since its launch in 2002, INTEGRAL has triggered on more than 78 γ-ray bursts in the 20-200 keV energy range with the IBIS/ISGRI instrument. Almost 30% of these bursts occurred within the fully coded field of view of the JEM-X detector (5), which operates in the 3-35 keV energy range. A detailed

  2. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independently of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.
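
    The key-invariant "relative pitch profile" is easy to make concrete: represent a melody by the successive intervals between its notes, a code that is unchanged by transposition. A small illustration (MIDI note numbers are used for convenience):

        def relative_profile(midi_notes):
            """Successive intervals in semitones: invariant under transposition."""
            return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

        melody = [60, 64, 67, 64]             # C4 E4 G4 E4
        transposed = [n + 5 for n in melody]  # the same melody, shifted to F
        assert relative_profile(melody) == relative_profile(transposed)
        print(relative_profile(melody))       # [4, 3, -3]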

  3. Head Tracking of Auditory, Visual, and Audio-Visual Targets.

    Science.gov (United States)

    Leung, Johahn; Wei, Vincent; Burgess, Martin; Carlile, Simon

    2015-01-01

    The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s. By integrating high-fidelity virtual auditory space with a high-speed visual presentation, we compared tracking responses to auditory targets against visual-only and audio-visual "bisensory" stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS errors were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.
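
    The three tracking metrics named above can be computed directly from target and head trajectories. The sketch below uses illustrative definitions (onset as time of first sustained head movement, gain as the slope relating head to target velocity); the paper's exact operationalizations may differ.

        import numpy as np

        def tracking_metrics(target, head, fs, onset_thresh=2.0):
            """RMS error (deg), velocity gain, and movement onset (s)."""
            rms = np.sqrt(np.mean((head - target) ** 2))
            v_t = np.gradient(target) * fs  # deg/s
            v_h = np.gradient(head) * fs
            gain = np.polyfit(v_t, v_h, 1)[0]  # slope of head vs target velocity
            moving = np.flatnonzero(np.abs(v_h) > onset_thresh)
            onset = moving[0] / fs if moving.size else float("nan")
            return rms, gain, onset

        fs = 100.0
        t = np.arange(int(4 * fs)) / fs
        target = 50.0 * np.sin(2 * np.pi * 0.25 * t)  # +/-50 deg sweep
        head = np.where(t > 0.2,
                        45.0 * np.sin(2 * np.pi * 0.25 * (t - 0.2)), 0.0)
        print(tracking_metrics(target, head, fs))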

  4. Brainstem auditory evoked potentials in children with lead exposure

    Directory of Open Access Journals (Sweden)

    Katia de Freitas Alvarenga

    2015-02-01

    Full Text Available Introduction: Earlier studies have demonstrated an auditory effect of lead exposure in children, but information on the effects of low chronic exposures needs to be further elucidated. Objective: To investigate the effect of low chronic exposures on the auditory system in children with a history of low blood lead levels, using an auditory electrophysiological test. Methods: Contemporary cross-sectional cohort. Study participants underwent tympanometry, pure tone and speech audiometry, transient evoked otoacoustic emissions, and brainstem auditory evoked potentials, with blood lead monitoring over a period of 35.5 months. The study included 130 children, with ages ranging from 18 months to 14 years, 5 months (mean age 6 years, 8 months ± 3 years, 2 months). Results: The mean time-integrated cumulative blood lead index was 12 µg/dL (SD ± 5.7; range: 2.4–33). All participants had hearing thresholds equal to or below 20 dB HL and normal amplitudes of transient evoked otoacoustic emissions. No association was found between the absolute latencies of waves I, III, and V, the interpeak latencies I-III, III-V, and I-V, and the cumulative lead values. Conclusion: No evidence of toxic effects from chronic low lead exposures was observed on the auditory function of children living in a lead-contaminated area.

  5. Head Tracking of Auditory, Visual and Audio-Visual Targets

    Directory of Open Access Journals (Sweden)

    Johahn Leung

    2016-01-01

    Full Text Available The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head tracking behavior to a moving auditory target along a horizontal 100° arc in the frontal hemisphere, with velocities ranging from 20°/s to 110°/s. By integrating high fidelity virtual auditory space with a high-speed visual presentation we compared tracking responses of auditory targets against visual-only and audio-visual bisensory stimuli. Three metrics were measured: onset, RMS, and gain error. The results showed that tracking accuracy (RMS error) varied linearly with target velocity, with a significantly higher rate in audition. Also, when the target moved faster than 80°/s, onset and RMS error were significantly worse in audition than in the other modalities, while responses in the visual and bisensory conditions were statistically identical for all metrics measured. Lastly, audio-visual facilitation was not observed when tracking bisensory targets.

  6. Placing prairie pothole wetlands along spatial and temporal continua to improve integration of wetland function in ecological investigations

    Science.gov (United States)

    Euliss, Ned H.; Mushet, David M.; Newton, Wesley E.; Otto, Clint R.V.; Nelson, Richard D.; LaBaugh, James W.; Scherff, Eric J.; Rosenberry, Donald O.

    2014-01-01

    We evaluated the efficacy of using chemical characteristics to rank wetland relation to surface and groundwater along a hydrologic continuum ranging from groundwater recharge to groundwater discharge. We used 27 years (1974–2002) of water chemistry data from 15 prairie pothole wetlands and known hydrologic connections of these wetlands to groundwater to evaluate spatial and temporal patterns in chemical characteristics that correspond to the unique ecosystem functions each wetland performed. Due to the mineral content and the low permeability rate of glacial till and soils, salinity of wetland waters increased along a continuum of wetland relation to groundwater recharge, flow-through or discharge. Mean inter-annual specific conductance (a proxy for salinity) increased along this continuum from wetlands that recharge groundwater being fresh to wetlands that receive groundwater discharge being the most saline, and wetlands that both recharge and discharge to groundwater (i.e., groundwater flow-through wetlands) being of intermediate salinity. The primary axis from a principal component analysis revealed that specific conductance (and major ions affecting conductance) explained 71% of the variation in wetland chemistry over the 27 years of this investigation. We found that long-term averages from this axis were useful to identify a wetland’s long-term relation to surface and groundwater. Yearly or seasonal measurements of specific conductance can be less definitive because of highly dynamic inter- and intra-annual climate cycles that affect water volumes and the interaction of groundwater and geologic materials, and thereby influence the chemical composition of wetland waters. The influence of wetland relation to surface and groundwater on water chemistry has application in many scientific disciplines and is especially needed to improve ecological understanding in wetland investigations. We suggest ways that monitoring in situ wetland conditions could be linked
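
    The principal component analysis described above is straightforward to reproduce in outline. The sketch below runs a PCA on a made-up wetland-chemistry matrix in which one latent salinity gradient drives several ion concentrations, so the first component captures most of the variance, analogous to the 71% reported; all data and variable choices are hypothetical:

```python
import numpy as np

# Hypothetical wetland-chemistry matrix: rows = wetland-years,
# columns = specific conductance and major ion concentrations.
rng = np.random.default_rng(0)
salinity_axis = rng.normal(size=120)  # latent recharge-to-discharge gradient
X = np.column_stack([
    2.0 * salinity_axis + rng.normal(scale=0.5, size=120),  # specific conductance
    1.8 * salinity_axis + rng.normal(scale=0.6, size=120),  # sulfate
    1.5 * salinity_axis + rng.normal(scale=0.7, size=120),  # magnesium
    rng.normal(size=120),                                   # unrelated variable
])

# PCA via singular value decomposition of the standardized matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, _ = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"PC1 explains {explained[0]:.0%} of the variance")
```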

  7. Temporal Succession of Phytoplankton Assemblages in a Tidal Creek System of the Sundarbans Mangroves: An Integrated Approach

    Directory of Open Access Journals (Sweden)

    Dola Bhattacharjee

    2013-01-01

    Full Text Available Sundarbans, the world's largest mangrove ecosystem, is unique and biologically diverse. A study was undertaken to track temporal succession of phytoplankton assemblages at the generic level (≥10 µm) encompassing 31 weeks of sampling (June 2010–May 2011) in Sundarbans, based on microscopy and hydrological measurements. As part of this study, amplification and sequencing of the type ID rbcL subunit of the RuBisCO enzyme were also applied to infer chromophytic algal groups (≤10 µm size) from one of the study points. We report the presence of 43 genera of Bacillariophyta, in addition to other phytoplankton groups, based on microscopy. Phytoplankton cell abundance, which was highest in winter and spring, ranged between 300 and 27,500 cells/L during this study. Cell biovolume varied between winter of 2010 (90–35,281.04 µm³) and spring-summer of 2011 (52–33,962.24 µm³). Winter supported large chain-forming diatoms, while spring supported small diatoms, followed by other algal groups in summer. The clone library approach showed dominance of Bacillariophyta-like sequences, in addition to Cryptophyta-, Haptophyta-, Pelagophyta-, and Eustigmatophyta-like sequences, which were detected for the first time, highlighting their importance in the mangrove ecosystem. This study clearly shows that a combination of microscopy and molecular tools can improve understanding of phytoplankton assemblages in mangrove environments.

  8. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Science.gov (United States)

    Schwartz, Jean-Luc; Savariaux, Christophe

    2014-07-01

    An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  9. No, there is no 150 ms lead of visual speech on auditory speech, but a range of audiovisual asynchronies varying from small audio lead to large audio lag.

    Directory of Open Access Journals (Sweden)

    Jean-Luc Schwartz

    2014-07-01

    Full Text Available An increasing number of neuroscience papers capitalize on the assumption published in this journal that visual speech would be typically 150 ms ahead of auditory speech. It happens that the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases, for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call "preparatory gestures". However, when syllables are chained in sequences, as they are typically in most parts of a natural speech utterance, asynchrony should be defined in a different way. This is what we call "comodulatory gestures" providing auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

  10. Perceptual training narrows the temporal window of multisensory binding

    Science.gov (United States)

    Powers, Albert R.; Hillock, Andrea R.; Wallace, Mark T.

    2009-01-01

    The brain’s ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be bound together and perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Here, the plasticity in the size of this temporal window was investigated using a perceptual learning paradigm in which participants were given feedback during a two-alternative forced-choice (2-AFC) audiovisual simultaneity judgment task. Training resulted in a marked (i.e., approximately 40%) narrowing in the size of the window. To rule out the possibility that this narrowing was the result of changes in cognitive biases, a second experiment employing a two-interval forced choice (2-IFC) paradigm was undertaken during which participants were instructed to identify a simultaneously-presented audiovisual pair presented within one of two intervals. The 2-IFC paradigm resulted in a narrowing that was similar in both degree and dynamics to that using the 2-AFC approach. Together, these results illustrate that different methods of multisensory perceptual training can result in substantial alterations in the circuits underlying the perception of audiovisual simultaneity. These findings suggest a high degree of flexibility in multisensory temporal processing and have important implications for interventional strategies that may be used to ameliorate clinical conditions (e.g., autism, dyslexia) in which multisensory temporal function may be impaired. PMID:19793985
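
    A common way to quantify such a temporal binding window from simultaneity judgments is to fit a symmetric function (e.g., a Gaussian) to the proportion of "simultaneous" responses across stimulus onset asynchronies and take its width; a 40% narrowing would then appear directly in the fitted width. A minimal sketch with hypothetical data (the paper's actual fitting procedure may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical simultaneity-judgment data: stimulus onset asynchronies (ms,
# negative = audio leads) and proportion of "simultaneous" responses.
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], float)
p_simult = np.array([0.05, 0.20, 0.60, 0.85, 0.95, 0.90, 0.70, 0.30, 0.10])

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

(amp, mu, sigma), _ = curve_fit(gaussian, soa, p_simult, p0=[1.0, 0.0, 100.0])
fwhm = 2.355 * sigma  # full width at half maximum, one window-size convention
print(f"centre = {mu:.0f} ms, window width (FWHM) = {fwhm:.0f} ms")
```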

  11. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
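
    The direct-to-reverberant ratio (D/R) cue discussed above can be estimated from a room impulse response by splitting its energy into a short direct-sound window and the remaining reverberant tail. The sketch below uses a synthetic impulse response and an assumed 2.5 ms direct window; neither comes from the study:

```python
import numpy as np

def direct_to_reverberant_ratio(h, fs, direct_window_ms=2.5):
    """Estimate D/R (dB) from a room impulse response h sampled at fs Hz.

    The direct sound is taken as the energy within a short window after the
    first strong peak; the split point is an assumption, not a study value.
    """
    onset = np.argmax(np.abs(h))                    # direct-path arrival
    split = onset + int(fs * direct_window_ms / 1000)
    direct_energy = np.sum(h[:split] ** 2)
    reverberant_energy = np.sum(h[split:] ** 2)
    return 10 * np.log10(direct_energy / reverberant_energy)

# Hypothetical impulse response: a direct spike plus an exponentially
# decaying reverberant tail; nearer sources yield higher D/R.
fs = 48000
t = np.arange(int(0.3 * fs)) / fs
rng = np.random.default_rng(1)
h = 0.02 * rng.normal(size=t.size) * np.exp(-t / 0.08)
h[0] = 1.0                                          # direct sound
print(f"D/R = {direct_to_reverberant_ratio(h, fs):.1f} dB")
```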

  12. Effect of stimulus hemifield on free-field auditory saltation.

    Science.gov (United States)

    Ishigami, Yoko; Phillips, Dennis P

    2008-07-01

    Auditory saltation is the orderly misperception of the spatial location of repetitive click stimuli emitted from two successive locations when the inter-click intervals (ICIs) are sufficiently short. The clicks are perceived as originating not only from the actual source locations, but also from locations between them. In two tasks, the present experiment compared free-field auditory saltation for 90° excursions centered in the frontal, rear, left and right acoustic hemifields, by measuring the ICI at which subjects report 50% illusion strength (subjective task) and the ICI at which subjects could not distinguish real motion from saltation (objective task). A comparison of the saltation illusion for excursions spanning the midline (i.e. for frontal or rear hemifields) with that for stimuli in the lateral hemifields (left or right) revealed that the illusion was weaker for the midline-straddling conditions (i.e. the illusion was restricted to shorter ICIs). This may reflect the contribution of two perceptual channels to the task in the midline conditions (as opposed to one in the lateral hemifield conditions), or the fact that the temporal dynamics of localization differ between the midline and lateral hemifield conditions. A subsidiary comparison of saltation supported in the left and right auditory hemifields, and therefore by the right and left auditory forebrains, revealed no difference.

  13. Thalamic and parietal brain morphology predicts auditory category learning.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties.

  14. Vestibular receptors contribute to cortical auditory evoked potentials.

    Science.gov (United States)

    Todd, Neil P M; Paillard, Aurore C; Kluk, Karolina; Whittle, Elizabeth; Colebatch, James G

    2014-03-01

    Acoustic sensitivity of the vestibular apparatus is well-established, but the contribution of vestibular receptors to the late auditory evoked potentials of cortical origin is unknown. Evoked potentials from 500 Hz tone pips were recorded using 70-channel EEG at several intensities below and above the vestibular acoustic threshold, as determined by vestibular evoked myogenic potentials (VEMPs). In healthy subjects, both mid- and long-latency auditory evoked potentials (AEPs), consisting of Na, Pa, N1 and P2 waves, were observed in the sub-threshold conditions. However, in passing through the vestibular threshold, systematic changes were observed in the morphology of the potentials and in the intensity dependence of their amplitude and latency. These changes were absent in a patient without functioning vestibular receptors. In particular, for the healthy subjects there was a fronto-central negativity, which appeared at about 42 ms, referred to as an N42, prior to the AEP N1. Source analysis of both the N42 and N1 indicated involvement of cingulate cortex, as well as bilateral superior temporal cortex. Our findings are best explained by vestibular receptors contributing to what were hitherto considered as purely auditory evoked potentials and in addition tentatively identify a new component that appears to be primarily of vestibular origin.

  15. Visual motion integration by neurons in the middle temporal area of a New World monkey, the marmoset.

    Science.gov (United States)

    Solomon, Selina S; Tailby, Chris; Gharaei, Saba; Camp, Aaron J; Bourne, James A; Solomon, Samuel G

    2011-12-01

    The middle temporal area (MT/V5) is an anatomically distinct region of primate visual cortex that is specialized for the processing of image motion. It is generally thought that some neurons in area MT are capable of signalling the motion of complex patterns, but this has only been established in the macaque monkey. We made extracellular recordings from single units in area MT of anaesthetized marmosets, a New World monkey. We show through quantitative analyses that some neurons (35 of 185; 19%) are capable of signalling pattern motion ('pattern cells'). Across several dimensions, the visual response of pattern cells in marmosets is indistinguishable from that of pattern cells in macaques. Other neurons respond to the motion of oriented contours in a pattern ('component cells') or show intermediate properties. In addition, we encountered a subset of neurons (22 of 185; 12%) insensitive to sinusoidal gratings but very responsive to plaids and other two-dimensional patterns and otherwise indistinguishable from pattern cells. We compared the response of each cell class to drifting gratings and dot fields. In pattern cells, directional selectivity was similar for gratings and dot fields; in component cells, directional selectivity was weaker for dot fields than gratings. Pattern cells were more likely to have stronger suppressive surrounds, prefer lower spatial frequencies and prefer higher speeds than component cells. We conclude that pattern motion sensitivity is a feature of some neurons in area MT of both New and Old World monkeys, suggesting that this functional property is an important stage in motion analysis and is likely to be conserved in humans.
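
    Pattern versus component classification is conventionally based on partial correlations between a cell's measured plaid direction tuning and the predictions of the two models. The abstract does not spell out the computation, so the following is a generic sketch with made-up tuning curves:

```python
import numpy as np

def partial_corr(r_xy, r_xz, r_yz):
    # Correlation of x with y after removing what both share with z.
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

directions = np.arange(0, 360, 30)

def tuning(pref, width=30):
    # Hypothetical Gaussian direction tuning with circular wrapping.
    d = (directions - pref + 180) % 360 - 180
    return np.exp(-0.5 * (d / width) ** 2)

grating = tuning(90)

# Model predictions for a 120-degree plaid moving at 90 degrees:
pattern_pred = grating                               # follows overall pattern motion
component_pred = tuning(90 - 60) + tuning(90 + 60)   # sum of the two component responses

measured = pattern_pred + 0.05 * np.random.default_rng(2).normal(size=directions.size)

r_p = np.corrcoef(measured, pattern_pred)[0, 1]
r_c = np.corrcoef(measured, component_pred)[0, 1]
r_pc = np.corrcoef(pattern_pred, component_pred)[0, 1]

Rp = partial_corr(r_p, r_c, r_pc)   # pattern partial correlation
Rc = partial_corr(r_c, r_p, r_pc)   # component partial correlation
print(f"Rp = {Rp:.2f}, Rc = {Rc:.2f} ->",
      "pattern cell" if Rp > Rc else "component cell")
```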

  16. Task-dependent calibration of auditory spatial perception through environmental visual observation.

    Science.gov (United States)

    Tonelli, Alessia; Brayda, Luca; Gori, Monica

    2015-01-01

    Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information that is sufficient to influence auditory perception is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task and whether this influence is task-specific or environment-specific or both. To test these issues we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.

  17. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, Fatemeh; Kinnaird, Catherine; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael

    2014-01-01

    The aim of the current study was to characterize the brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit saccular Vestibular Evoked Myogenic Potentials (VEMP) (Colebatch & Halmagyi 1992; Colebatch et al. 1994). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle VEMPs, while being faster and less irritating for the subjects (Curthoys et al. 2009, Wackym et al., 2012). However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from semicircular canals. This is of high importance for studying the vestibular disorders related to otolith deficits. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, pre and post central gyri, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation (Bottini et al., 1994; Dieterich et al., 2003; Emri et al., 2003; Schlindwein et al., 2008; Janzen et al., 2008). Here we hypothesized that the skull tap elicits a similar pattern of cortical activity as the auditory tone burst. Subjects put on a set of MR-compatible skull tappers and headphones inside the 3T GE scanner, while lying in supine position, with eyes closed. All subjects received both forms of the stimulation, however, the order of stimulation with auditory tone burst and air-conducted skull tap was counterbalanced across subjects. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular cortex, resulting in vestibular response (Halmagyi et al., 1995). Auditory tone bursts were also delivered for comparison. To validate

  18. Emotional words induce enhanced brain activity in schizophrenic patients with auditory hallucinations.

    Science.gov (United States)

    Sanjuan, Julio; Lull, Juan J; Aguilar, Eduardo J; Martí-Bonmatí, Luis; Moratal, David; Gonzalez, José C; Robles, Montserrat; Keshavan, Matcheri S

    2007-01-15

    Neuroimaging studies of emotional response in schizophrenia have mainly used visual (faces) paradigms and shown globally reduced brain activity. None of these studies have used an auditory paradigm. Our principal aim is to evaluate the emotional response of patients with schizophrenia to neutral and emotional words. An auditory emotional paradigm based on the most frequent words heard by psychotic patients with auditory hallucinations was designed. This paradigm was applied to evaluate cerebral activation with functional magnetic resonance imaging (fMRI) in 11 patients with schizophrenia with persistent hallucinations and 10 healthy subjects. We found a clear enhanced activity of the frontal lobe, temporal cortex, insula, cingulate, and amygdala (mainly right side) in patients when hearing emotional words in comparison with controls. Our findings are consistent with other studies suggesting a relevant role for emotional response in the pathogenesis and treatment of auditory hallucinations.

  19. Integration Of Spatio-Temporal Analysis Of Rainfall And Community Information System To Reduce Landslide Risk In Indonesia

    Directory of Open Access Journals (Sweden)

    Sudibyakto

    2013-07-01

    Full Text Available Indonesia is vulnerable to many types of disasters, both natural and anthropogenic. Indonesian seasonal rainfall also shows inter-annual variation. Sediment-related disasters such as landslides are the most frequent disasters and have significantly impacted the natural, human, and social environment. Although many disaster mitigation efforts have been conducted to reduce disaster risk, there is still an urgent need to improve the early warning system by communicating risk to the local community. Integration of spatio-temporal analysis of rainfall and a disaster management information system would be required to improve disaster management in Indonesia. An application of the Disaster Management Information System in the study area is presented, including an evacuation map used by the local community.

  20. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas Giret

    2015-10-01

    Full Text Available Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered to play a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmitting information about the self-generated calls in males and to storage of

  1. Temporal processes involved in simultaneous reflection masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg

    2006-01-01

    reflection delays and enhances the test reflection for large delays. Employing a 200-ms-long broadband noise burst as input signal, the critical delay separating these two binaural phenomena was found to be 7–10 ms. It was suggested that the critical delay refers to a temporal window that is employed......, resulting in a critical delay of about 2–3 ms for 20-ms-long stimuli. Hence, for very short stimuli the temporal window or critical delay exhibits values similar to the auditory temporal resolution as, for instance, observed in gap-detection tasks. It is suggested that the larger critical delay observed...

  2. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    Full Text Available A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this.

  3. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Science.gov (United States)

    Brown, David J; Simpson, Andrew J R; Proulx, Michael J

    2015-01-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don't yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information have limited effect on this.

  4. Bilateral sudden sensorineural hearing loss following unilateral temporal bone fracture.

    Science.gov (United States)

    Hunchaisri, Niran

    2009-06-01

    Temporal bone fractures usually cause unilateral sensorineural hearing loss (SNHL) when the fracture violates the otic capsule on that side. Bilateral SNHL from a unilateral temporal bone fracture is rarely seen; labyrinthine concussion is considered to be the pathogenesis in these cases. This article reports an additional case of bilateral SNHL from a unilateral temporal bone fracture, with a different pattern of SNHL that may result from an occlusion of the internal auditory artery.

  5. Temporal Variations And Spectral Properties of Be/X-ray Pulsar GRO J1008-57 Studied by INTEGRAL

    CERN Document Server

    Wang, Wei

    2013-01-01

    Spin period variations and hard X-ray spectral properties of the Be/X-ray pulsar GRO J1008-57 are studied with INTEGRAL observations. Pulsation periods of 93.66 s in 2004 and 93.73 s in 2009 are determined. Pulse profiles of GRO J1008-57 during outbursts are energy dependent: a double-peak profile in the soft 3-7 keV band and a single-peak profile in hard X-rays above 7 keV. GRO J1008-57 underwent a spin-down trend from 1993 to 2009 at a rate of 4.1x10^-5 s day^-1, and may have transitioned to a spin-up trend after 2009. The 3-100 keV spectra of GRO J1008-57 during outbursts are fitted with a photon index of 1.4 and cutoff energies of 23-29 keV. We find a relatively soft spectrum in the early phase of the 2009 outburst, with a cutoff energy of 13 keV. Above a hard X-ray flux of 10^-9 erg cm^-2 s^-1, the spectra of GRO J1008-57 during outbursts need an enhanced hydrogen absorption with a column density of 6x10^22 cm^-2. The observed dip-like pulse profile of GRO J1008-57 in soft X-ray bands should be caused by this intrinsic a...
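
    The quoted spin-down rate can be sanity-checked against the two period measurements: the period grew by about 0.07 s over the roughly five years between 2004 and 2009, the same order of magnitude as the long-term 1993-2009 trend. A quick check (the five-year span is approximate):

```python
# Period change between the two INTEGRAL measurements.
p_2004, p_2009 = 93.66, 93.73   # pulsation periods in seconds
days = 5 * 365.25               # approximate span, 2004 -> 2009
rate = (p_2009 - p_2004) / days
print(f"{rate:.1e} s/day")      # ~3.8e-5 s/day, close to the quoted
                                # 1993-2009 trend of 4.1e-5 s/day
```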

  6. Feedbacks between managed irrigation and water availability: Diagnosing temporal and spatial patterns using an integrated hydrologic model

    Science.gov (United States)

    Condon, Laura E.; Maxwell, Reed M.

    2014-03-01

    Groundwater-fed irrigation has been shown to deplete groundwater storage, decrease surface water runoff, and increase evapotranspiration. Here we simulate soil moisture-dependent groundwater-fed irrigation with an integrated hydrologic model. This allows for direct consideration of feedbacks between irrigation demand and groundwater depth. Special attention is paid to system dynamics in order to characterize spatial variability in irrigation demand and response to increased irrigation stress. A total of 80 years of simulation are completed for the Little Washita Basin in Southwestern Oklahoma, USA, spanning a range of agricultural development scenarios and management practices. Results show regionally aggregated irrigation impacts consistent with other studies. However, here a spectral analysis reveals that groundwater-fed irrigation also amplifies the annual streamflow cycle while dampening longer-term cyclical behavior with increased irrigation during climatological dry periods. Feedbacks between the managed and natural system are clearly observed with respect to both irrigation demand and utilization when water table depths are within a critical range. Although the model domain is heterogeneous with respect to both surface and subsurface parameters, relationships between irrigation demand, water table depth, and irrigation utilization are consistent across space and between scenarios. Still, significant local heterogeneities are observed both with respect to transient behavior and response to stress. Spatial analysis of transient behavior shows that farms with groundwater depths within a critical depth range are most sensitive to management changes. Differences in behavior highlight the importance of groundwater's role in system dynamics in addition to water availability.
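
    The spectral analysis mentioned above amounts to comparing the amplitude of cyclical components of streamflow between scenarios. A minimal sketch with synthetic monthly series, in which irrigation amplifies the annual cycle and damps a slower climatic oscillation (all series and frequencies are made up for illustration):

```python
import numpy as np

def cycle_amplitude(flow, samples_per_year=12, cycles_per_year=1.0):
    """Amplitude of one cyclical component of a monthly streamflow series."""
    n = flow.size
    spectrum = np.fft.rfft(flow - flow.mean())
    freqs = np.fft.rfftfreq(n, d=1.0 / samples_per_year)  # cycles per year
    k = np.argmin(np.abs(freqs - cycles_per_year))
    return 2 * np.abs(spectrum[k]) / n

# Hypothetical 80-year monthly series: irrigation amplifies the annual
# cycle and damps a slow ~8-year climatic oscillation.
months = np.arange(80 * 12)
natural = 10 + 3 * np.sin(2 * np.pi * months / 12) + 2 * np.sin(2 * np.pi * months / 96)
irrigated = 10 + 4 * np.sin(2 * np.pi * months / 12) + 1 * np.sin(2 * np.pi * months / 96)

for name, series in [("natural", natural), ("irrigated", irrigated)]:
    annual = cycle_amplitude(series, cycles_per_year=1.0)
    slow = cycle_amplitude(series, cycles_per_year=12.0 / 96)
    print(f"{name}: annual amplitude {annual:.2f}, ~8-year amplitude {slow:.2f}")
```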

  7. Higher dietary diversity is related to better visual and auditory sustained attention.

    Science.gov (United States)

    Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed

    2016-04-01

    Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS. These findings suggest that higher dietary diversity is related to better visual and auditory sustained attention.
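
    An FAO-style dietary diversity score is essentially a count of distinct food groups consumed in the 24-h recall. A toy sketch follows; the nine groups are modeled on the FAO women's dietary diversity guideline, and the item-to-group mapping is invented for illustration:

```python
# Minimal FAO-style dietary diversity score: count how many of a fixed set
# of food groups appear in one 24-h recall. The grouping below is an
# assumption for illustration, not the study's exact scheme.
FOOD_GROUPS = {
    "starchy staples": {"rice", "bread", "potato"},
    "dark green leafy vegetables": {"spinach", "kale"},
    "other vitamin A-rich fruits/vegetables": {"carrot", "mango"},
    "other fruits and vegetables": {"apple", "cucumber", "onion"},
    "organ meat": {"liver"},
    "meat and fish": {"chicken", "beef", "fish"},
    "eggs": {"egg"},
    "legumes, nuts and seeds": {"lentils", "beans", "walnut"},
    "milk and milk products": {"milk", "yogurt", "cheese"},
}

def dietary_diversity_score(recall_items):
    consumed = {name for name, items in FOOD_GROUPS.items()
                if items & set(recall_items)}
    return len(consumed)

print(dietary_diversity_score(["rice", "spinach", "yogurt", "egg", "apple"]))  # -> 5
```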

  8. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  9. Auditory display as a prosthetic hand sensory feedback for reaching and grasping tasks.

    Science.gov (United States)

    Gonzalez, Jose; Suzuki, Hiroyuki; Natsumi, Nakayama; Sekine, Masashi; Yu, Wenwei

    2012-01-01

    Upper limb amputees have to rely extensively on visual feedback in order to monitor and manipulate their prosthetic device successfully. This situation imposes a heavy cognitive burden, which generates fatigue and frustration. Therefore, in order to enhance motor-sensory performance and awareness, an auditory display was used as a sensory feedback system for the prosthetic hand's spatio-temporal and force information in a complete reaching and grasping setting. The main objective of this study was to explore the effects of using the auditory display to monitor the prosthetic hand during a complete reaching and grasping motion. The results presented in this paper point out that the use of an auditory display to monitor and control a robot hand greatly improves temporal and grasping performance, while reducing mental effort and improving user confidence.
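
    Auditory displays of this kind typically use parameter mapping, e.g., mapping grasp force to pitch. The record does not give the mapping used, so the following is a generic sketch with assumed frequency bounds and sample rate:

```python
import numpy as np

def sonify_force(force, fs=16000, duration=0.1, f_lo=220.0, f_hi=880.0):
    """Map a normalized grasp force (0..1) to a short tone whose pitch rises
    with force. A generic parameter-mapping sketch; the actual mapping used
    in the study is not specified in the record.
    """
    freq = f_lo + (f_hi - f_lo) * np.clip(force, 0.0, 1.0)
    t = np.arange(int(fs * duration)) / fs
    return 0.5 * np.sin(2 * np.pi * freq * t)

# Feedback stream for an increasing grasp force while the hand closes.
tones = [sonify_force(f) for f in (0.1, 0.4, 0.7, 1.0)]
audio = np.concatenate(tones)  # write to a sound device or WAV file to listen
print(audio.shape)
```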

  10. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues...... that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners...... listeners and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones

  11. Peripheral auditory processing and speech reception in impaired hearing

    DEFF Research Database (Denmark)

    Strelcyk, Olaf

    One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS)...... Overall, this work provides insights into factors affecting auditory processing in listeners with impaired hearing and may have implications for future models of impaired auditory signal processing as well as advanced compensation strategies.

  12. (Central) Auditory Processing: the impact of otitis media

    Directory of Open Access Journals (Sweden)

    Leticia Reis Borges

    2013-07-01

    Full Text Available OBJECTIVE: To analyze auditory processing test results in children suffering from otitis media in their first five years of life, considering their age. Furthermore, to classify central auditory processing test findings regarding the hearing skills evaluated. METHODS: A total of 109 students between 8 and 12 years old were divided into three groups. The control group consisted of 40 students from public schools without a history of otitis media. Experimental group I consisted of 39 students from public schools and experimental group II consisted of 30 students from private schools; students in both groups suffered from secretory otitis media in their first five years of life and underwent surgery for placement of bilateral ventilation tubes. The individuals underwent complete audiological evaluation and assessment by Auditory Processing tests. RESULTS: The left ear showed significantly worse performance when compared to the right ear in the dichotic digits test and pitch pattern sequence test. The students from the experimental groups showed worse performance when compared to the control group in the dichotic digits test and gaps-in-noise. Children from experimental group I had significantly lower results on the dichotic digits and gaps-in-noise tests compared with experimental group II. The hearing skills that were altered were temporal resolution and figure-ground perception. CONCLUSION: Children who suffered from secretory otitis media in their first five years and who underwent surgery for placement of bilateral ventilation tubes showed worse performance in auditory abilities, and children from public schools had worse results on auditory processing tests compared with students from private schools.

  13. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu Yue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  14. Auditory Stream Segregation Improves Infants' Selective Attention to Target Tones Amid Distracters

    Science.gov (United States)

    Smith, Nicholas A.; Trainor, Laurel J.

    2011-01-01

    This study examined the role of auditory stream segregation in the selective attention to target tones in infancy. Using a task adapted from Bregman and Rudnicky's 1975 study and implemented in a conditioned head-turn procedure, infant and adult listeners had to discriminate the temporal order of 2,200 and 2,400 Hz target tones presented alone,…

  15. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity

    Directory of Open Access Journals (Sweden)

    Lena-Vanessa Dollezal

    2014-06-01

    Full Text Available Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude modulated (SAM) tones providing purely temporal, spectral, or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream), or an ambiguous percept by adjusting the fmod difference between A and B tones (∆fmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on ∆fmod between A and B SAM tones. The effect of ∆fmod, however, differs between auditory cortex and frontal regions, suggesting differences in representation related to the degree of perceptual ambiguity of
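
    A SAM tone is simple to synthesize: a sinusoidal carrier multiplied by a raised sinusoidal envelope. The sketch below builds an ABA- triplet in which A and B differ only in modulation frequency; all parameter values are illustrative, not those of the study:

```python
import numpy as np

def sam_tone(fc, fmod, duration=0.5, fs=44100, depth=1.0):
    """Sinusoidally amplitude-modulated (SAM) tone:
    x(t) = (1 + depth*sin(2*pi*fmod*t)) * sin(2*pi*fc*t), peak-normalized.
    Parameter values are illustrative, not those used in the study.
    """
    t = np.arange(int(fs * duration)) / fs
    x = (1.0 + depth * np.sin(2 * np.pi * fmod * t)) * np.sin(2 * np.pi * fc * t)
    return x / np.max(np.abs(x))

# An ABA- triplet: B differs from A only in modulation frequency (delta-fmod),
# the cue manipulated to push the percept toward one or two streams.
a = sam_tone(fc=1000.0, fmod=30.0, duration=0.125)
b = sam_tone(fc=1000.0, fmod=90.0, duration=0.125)
gap = np.zeros(a.size)             # the "-" (silent) slot of the ABA- cycle
sequence = np.concatenate([a, b, a, gap])
print(sequence.size)
```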

  16. Speed on the dance floor: Auditory and visual cues for musical tempo.

    Science.gov (United States)

    London, Justin; Burger, Birgitta; Thompson, Marc; Toiviainen, Petri

    2016-02-01

    Musical tempo is most strongly associated with the rate of the beat or "tactus," which may be defined as the most prominent rhythmic periodicity present in the music, typically in a range of 1.67-2 Hz. However, other factors such as rhythmic density, mean rhythmic inter-onset interval, metrical (accentual) structure, and rhythmic complexity can affect perceived tempo (Drake, Gros, & Penel, 1999; London, 2011). Visual information can also give rise to a perceived beat/tempo (Iversen et al., 2015), and auditory and visual temporal cues can interact and mutually influence each other (Soto-Faraco & Kingstone, 2004; Spence, 2015). A five-part experiment was performed to assess the integration of auditory and visual information in judgments of musical tempo. Participants rated the speed of six classic R&B songs on a seven-point scale while observing an animated figure dancing to them. Participants were presented with original and time-stretched (±5%) versions of each song in audio-only, audio+video (A+V), and video-only conditions. In some videos the animations were of spontaneous movements to the different time-stretched versions of each song, and in other videos the animations were of "vigorous" versus "relaxed" interpretations of the same auditory stimulus. Two main results were observed. First, in all conditions with audio, even though participants were able to correctly rank the original vs. time-stretched versions of each song, a song-specific tempo-anchoring effect was observed, such that sped-up versions of slower songs were judged to be faster than slowed-down versions of faster songs, even when their objective beat rates were the same. Second, when viewing a vigorous dancing figure in the A+V condition, participants gave faster tempo ratings than from the audio alone or when viewing the same audio with a relaxed dancing figure. The implications of this illusory tempo percept for cross-modal sensory integration and
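
    The 1.67-2 Hz tactus range maps directly onto familiar metronome values, and a ±5% time-stretch translates into a tempo change by the reciprocal factor. A small illustration (the 105 BPM base tempo is made up):

```python
# Beat rate in Hz vs. musical tempo in BPM, and the effect of +/-5% time-stretching.
def hz_to_bpm(hz):
    return hz * 60.0

for hz in (1.67, 2.0):
    print(f"{hz} Hz tactus = {hz_to_bpm(hz):.0f} BPM")  # 100 and 120 BPM

# Time-stretching the audio by factor s scales the tempo by 1/s.
original_bpm = 105.0
for s in (0.95, 1.00, 1.05):
    print(f"stretch factor {s:.2f}: {original_bpm / s:.1f} BPM")
```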

  17. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  18. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension.

    Science.gov (United States)

    Lee, HweeLing; Noppeney, Uta

    2011-08-01

    Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals are currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and spectrotemporal structure in a dorsal fronto-temporal circuitry.

  19. The effectiveness of imagery and sentence strategy instructions as a function of visual and auditory processing in young school-age children.

    Science.gov (United States)

    Weed, K; Ryan, E B

    1985-12-01

    The relationship between auditory and visual processing modality and strategy instructions was examined in first- and second-grade children. A Pictograph Sentence Memory Test was used to determine dominant processing modality as well as to assess instructional effects. The pictograph task was given first, followed by auditory or visual interference. Children who were disrupted more by visual interference were classified as visual processors, and those disrupted more by auditory interference were classified as auditory processors. Auditory and visual processors were then assigned to one of three conditions: an interactive imagery strategy, a sentence strategy, or a control group. Children in the imagery and sentence strategy groups were briefly taught to integrate the pictographs in order to remember them better. The sentence strategy was found to be effective for both auditory and visual processors, whereas the interactive imagery strategy was effective only for auditory processors.

  20. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert Ramkhalawansingh

    2016-04-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence suggesting that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.
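
    The dependent measure here, within-trial speed variability, is simple to compute. The sketch below contrasts two simulated speed traces; the traces and noise levels are placeholders, not the study's data, and serve only to make the "proportional reduction" measure concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical speed traces (km/h over one trial) for a single driver;
# the combined-cue trace is given lower noise purely for illustration.
visual_only = 60 + rng.normal(0, 4.0, size=600)   # optic flow alone
audiovisual = 60 + rng.normal(0, 2.5, size=600)   # optic flow + sound cues

sd_v = np.std(visual_only)          # within-trial speed variability
sd_av = np.std(audiovisual)
reduction = (sd_v - sd_av) / sd_v   # proportional reduction with sound
print(f"visual-only SD = {sd_v:.2f}, audiovisual SD = {sd_av:.2f}, "
      f"reduction = {reduction:.1%}")
```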

  1. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability relates to differences in central processing, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three groups of patients with auditory implants (Hannover Medical School; ABI: n = 6; CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
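
    As a rough illustration of the N1 measures reported above, the sketch below averages single-trial epochs and reads off the latency and amplitude of the most negative point in a conventional N1 window; the window bounds, sampling rate, and synthetic data are assumptions for demonstration only.

```python
import numpy as np

def n1_peak(epochs, times, window=(0.080, 0.150)):
    """Average single-trial epochs and return (latency_s, amplitude_uV)
    of the most negative point in the given post-stimulus window.
    epochs: (n_trials, n_samples) at one electrode; times: (n_samples,)."""
    erp = epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmin(erp[mask])            # N1 is a negative deflection
    return times[mask][i], erp[mask][i]

# Synthetic demo: an N1-like dip at ~110 ms buried in trial noise.
fs = 500
times = np.arange(-0.1, 0.4, 1 / fs)
rng = np.random.default_rng(0)
template = -5 * np.exp(-((times - 0.110) ** 2) / (2 * 0.012 ** 2))
epochs = template + rng.normal(0, 2.0, size=(80, times.size))
latency, amplitude = n1_peak(epochs, times)
print(f"N1 latency ~ {latency * 1000:.0f} ms, amplitude ~ {amplitude:.1f} uV")
```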

  2. Design of Spatio-temporal Information Integration System for Sand Ridge Field

    Institute of Scientific and Technical Information of China (English)

    王鑫浩; 葛小平; 丁贤荣; 李森

    2015-01-01

    In view of the difficulties of reclamation and exploitation in the coastal area of Jiangsu Province, we discuss a method of spatio-temporal information integration for the radial sand ridge field based on a marine spatio-temporal data model, and make a preliminary application of the quasi-synchronization concept to spatio-temporal data management. Combining ArcGIS secondary development with database programming on the Microsoft .NET platform, we designed and developed a spatio-temporal information integration system for the radial sand ridges with a hybrid C/S and B/S architecture. Three key modules (hydrological station management, drifting buoy tracing, and remote sensing image retrieval) and five layers (data source, data reorganization, database management, model base, and business expression) together form a fabric structure for interactive data query and management. The main functions of the system include data reorganization, model calculation, data analysis, spatio-temporal data receiving, and result display and sharing; it provides data and technical support for reclamation exploitation and hydrological monitoring in the radial sand ridge field.

  3. Mapping tropical forests and deciduous rubber plantations in Hainan Island, China by integrating PALSAR 25-m and multi-temporal Landsat images

    Science.gov (United States)

    Chen, Bangqian; Li, Xiangping; Xiao, Xiangming; Zhao, Bin; Dong, Jinwei; Kou, Weili; Qin, Yuanwei; Yang, Chuan; Wu, Zhixiang; Sun, Rui; Lan, Guoyu; Xie, Guishui

    2016-08-01

    Updated and accurate maps of tropical forests and industrial plantations, such as rubber plantations, are essential for understanding the carbon cycle and for optimal forest management, but existing optical-imagery-based efforts are greatly limited by frequent cloud cover. Here we explored the potential utility of integrating the 25-m cloud-free Phased Array type L-band Synthetic Aperture Radar (PALSAR) mosaic product and multi-temporal Landsat images to map forests and rubber plantations in Hainan Island, China. Based on structural information detected by PALSAR and the yearly maximum Normalized Difference Vegetation Index (NDVI), we first identified and mapped forests with a producer accuracy (PA) of 96% and a user accuracy (UA) of 98%. The resultant forest map showed reasonable spatial and areal agreement with the optical-based forest maps of Fine Resolution Observation and Monitoring of Global Land Cover (FROM-GLC) and GlobeLand30. We then extracted rubber plantations from the forest map according to their deciduous phase (using the yearly minimum Land Surface Water Index, LSWI), the rapid canopy changes during the Rubber Defoliation and Foliation (RDF) period (using the standard deviation of LSWI), and their dense canopy in the growing season (using the yearly maximum NDVI). The rubber plantation map yielded high accuracy when validated against ground-truth data (PA/UA > 86%) and evaluated against three farm-scale rubber plantation maps (PA/UA > 88%). We estimate that in 2010, Hainan Island had 2.11 × 10⁶ ha of forest and 5.15 × 10⁵ ha of rubber plantations. This study demonstrates the potential of integrating 25-m PALSAR-based structural information with Landsat-based spectral and phenological information for mapping tropical forests and rubber plantations.
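
    The index logic in this abstract is straightforward to sketch: NDVI and LSWI are normalized band differences, and the rubber criteria combine yearly statistics of both. The thresholds and function names below are illustrative guesses, not the published values.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def lswi(nir, swir):
    """Land Surface Water Index (NIR vs. shortwave infrared)."""
    return (nir - swir) / (nir + swir + 1e-9)

def rubber_mask(forest, ndvi_series, lswi_series,
                ndvi_max_thr=0.7, lswi_min_thr=0.1, lswi_std_thr=0.08):
    """Combine the three criteria from the abstract: dense growing-season
    canopy (high yearly max NDVI), a deciduous phase (low yearly min LSWI),
    and rapid canopy change around defoliation/foliation (high LSWI std).
    Series have shape (n_dates, rows, cols); thresholds are illustrative."""
    ndvi_max = np.nanmax(ndvi_series, axis=0)
    lswi_min = np.nanmin(lswi_series, axis=0)
    lswi_std = np.nanstd(lswi_series, axis=0)
    return (forest & (ndvi_max > ndvi_max_thr)
            & (lswi_min < lswi_min_thr) & (lswi_std > lswi_std_thr))

# Example with random reflectance stacks standing in for Landsat bands.
rng = np.random.default_rng(0)
nir, red, swir = (rng.random((12, 50, 50)) for _ in range(3))
mask = rubber_mask(np.ones((50, 50), bool), ndvi(nir, red), lswi(nir, swir))
```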

  4. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    2010-01-01

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  5. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  6. Integrated remote sensing for multi-temporal analysis of anthropic activities in the south-east of Mt. Vesuvius National Park

    Science.gov (United States)

    Manzo, C.; Mei, A.; Fontinovo, G.; Allegrini, A.; Bassani, C.

    2016-10-01

    This work presents a downscaling approach to the study of environmental change using multi- and hyperspectral remote sensing data. The study area, located in the south-east of Mt. Vesuvius National Park, has been affected by two main activities during recent decades: mining and, subsequently, municipal solid waste dumping. These activities had an environmental impact on the neighbouring areas, releasing dust and gaseous pollutants into the atmosphere and leachate into the ground. The approach integrated remote sensing data at different spectral and spatial resolutions. Landsat TM images were used to study the changes that occurred in the area through environmental indices at a wider temporal scale. To identify these indices in the study area, two MIVIS aerial images with high spatial and spectral resolution were used. The first image, acquired in July 2004, describes the environmental situation after the extraction and dumping activities at some sites, while the second image, acquired in 2010, reflects the situation after the construction of a new landfill in an old quarry. The spectral response of soil and vegetation was used to interpret stress conditions and other environmental anomalies in the study areas. Warning Zones were defined from the "core" and "neighbouring" parts of the anthropic area. Different classification methods were adopted to characterize the study area: Spectral Angle Mapper (SAM) classification provided local cover maps, while Linear Spectral Unmixing Analysis (LSMA) identified the main fractional changes of vegetation, substrate, and dark surfaces. Change detection on the spectral indices, supported by thermal anomalies, highlighted potentially stressed areas.
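
    Of the classification methods named above, Spectral Angle Mapper is the most compact to illustrate: each pixel spectrum is compared with reference (endmember) spectra by the angle between them, and the smallest angle wins. The sketch below is a generic implementation; the angle threshold is an arbitrary example, not a value from the study.

```python
import numpy as np

def spectral_angle(pixels, reference):
    """Angle (radians) between each pixel spectrum and a reference
    spectrum; smaller angles mean more similar spectral shapes.
    pixels: (n_pixels, n_bands); reference: (n_bands,)."""
    dots = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))

def sam_classify(pixels, endmembers, max_angle=0.10):
    """Assign each pixel to the nearest endmember by spectral angle;
    pixels whose best angle exceeds max_angle are left unclassified (-1)."""
    angles = np.stack([spectral_angle(pixels, e) for e in endmembers], axis=1)
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1
    return labels
```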

  7. Unanesthetized auditory cortex exhibits multiple codes for gaps in cochlear implant pulse trains.

    Science.gov (United States)

    Kirby, Alana E; Middlebrooks, John C

    2012-02-01

    Cochlear implant listeners receive auditory stimulation through amplitude-modulated electric pulse trains. Auditory nerve studies in animals demonstrate qualitatively different patterns of firing elicited by low versus high pulse rates, suggesting that stimulus pulse rate might influence the transmission of temporal information through the auditory pathway. We tested in awake guinea pigs the temporal acuity of auditory cortical neurons for gaps in cochlear implant pulse trains. Consistent with results under anesthetized conditions, temporal acuity improved with increasing pulse rates. Unlike the anesthetized condition, however, cortical neurons in the awake state responded to multiple distinct features of the gap-containing pulse trains, with the dominant features varying with stimulus pulse rate. Responses to the onset of the trailing pulse train (Trail-ON) provided the most sensitive gap detection at 1,017 and 4,069 pulse-per-second (pps) rates, particularly for short (25 ms) leading pulse trains. In contrast, at the 254 pps rate with long (200 ms) leading pulse trains, a sizeable fraction of units demonstrated greater temporal acuity in the form of robust responses to the offsets of the leading pulse train (Lead-OFF). Finally, TONIC responses exhibited decrements in firing rate during gaps but were rarely the most sensitive feature. Unlike results from anesthetized condi
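
    Although the record is cut off, the stimulus it describes (a leading pulse train, a silent gap, then a trailing train at a fixed pulse rate) can be sketched schematically. The sampling rate, durations, and single-sample pulse shape below are simplifying assumptions rather than the clinical biphasic pulses actually used.

```python
import numpy as np

def gap_pulse_train(rate_pps, lead_ms, gap_ms, trail_ms, fs=100_000):
    """Schematic gap-detection stimulus: a leading pulse train, a silent
    gap, then a trailing pulse train, all at rate_pps pulses per second.
    Each pulse is a single unit-amplitude sample (a simplification)."""
    def train(dur_ms):
        x = np.zeros(int(fs * dur_ms / 1000))
        x[::int(fs / rate_pps)] = 1.0   # one sample per pulse period
        return x
    gap = np.zeros(int(fs * gap_ms / 1000))
    return np.concatenate([train(lead_ms), gap, train(trail_ms)])

# e.g., 254 pps with a 200-ms leading train and a 5-ms gap
stim = gap_pulse_train(254, lead_ms=200, gap_ms=5, trail_ms=200)
```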