WorldWideScience

Sample records for auditory temporal integration

  1. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  2. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.
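
    As a rough illustration of the kind of stimuli described above, the sketch below (not the study's code; the sampling rate, function name, and parameter names are assumptions) generates a 0.7 Hz sinusoidal modulation of binocular disparity between 6 and 12 arcmin together with a 500 Hz tone whose amplitude modulation is delayed by an adjustable auditory lag.

```python
import numpy as np

def make_av_pair(lag_ms=40.0, dur_s=4.0, fs=44100, mod_hz=0.7, carrier_hz=500.0):
    """Illustrative stimulus pair: a visual attribute (disparity in arcmin) and a
    500 Hz tone, both sinusoidally modulated at 0.7 Hz, with the auditory
    modulation delayed by `lag_ms` relative to the visual one."""
    t = np.arange(int(dur_s * fs)) / fs
    # Visual modulation: disparity oscillating between 6 and 12 arcmin.
    visual_disparity = 9.0 + 3.0 * np.sin(2 * np.pi * mod_hz * t)
    # Auditory modulation: same sinusoid, shifted by the audio lag.
    lag_s = lag_ms / 1000.0
    am_envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * (t - lag_s)))
    tone = am_envelope * np.sin(2 * np.pi * carrier_hz * t)
    return visual_disparity, tone

# 40 ms auditory lag, as reported for the luminance-defined stimuli.
vis, aud = make_av_pair(lag_ms=40.0)
```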

  3. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. Perception of the deviants as two separate sound events (the top-down effect) thus occurred without a corresponding change in the stimulus-driven sound organization, which continued to represent the same deviants as one event (indexed by the MMN).

  4. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    Science.gov (United States)

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. When a concurrent cognitive load task was added to the speech task, gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes.

  5. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  6. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  7. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Introduction: In the process of aging, all the structures of the organism undergo changes, with consequences for the quality of hearing and comprehension. The hearing loss that occurs as a consequence of this process causes a reduction in communicative function and also a withdrawal from social relationships. Objective: To compare the performance in temporal auditory processing between elderly individuals with and without hearing loss. Method: This is a prospective, cross-sectional field study of diagnostic character. Twenty-one elderly individuals (16 women and 5 men), aged between 60 and 81 years, were analyzed, divided into two groups: a group "without hearing loss" (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a group "with hearing loss" (n = 8), with sensorineural hearing loss ranging in degree from mild to moderately severe. Both groups performed the frequency (PPS) and duration (DPS) pattern tests, to evaluate temporal sequencing ability, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution ability. Results: There was no statistically significant difference between the groups on the DPS and RGDT tests. Temporal sequencing ability was significantly better in the group without hearing loss when evaluated with the PPS test in the "humming" condition, and this difference grew significantly with increasing age group. Conclusion: There was no difference in temporal auditory processing in the comparison between the groups.

  8. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  9. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e., the surgical resection of the hippocampus, the amygdala, the temporal pole, and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, that allowed us to evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented with similar deficits in pitch retention, and in identification and short-term memorisation of environmental sounds, while not being impaired in basic acoustic processing, compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  10. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
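
    Phase tracking of this kind is commonly quantified as inter-trial phase coherence in a given frequency band. The sketch below is a generic illustration of that computation, not the analysis pipeline used in the study; the function name, filter choices, and band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def intertrial_phase_coherence(trials, fs, band):
    """Phase-tracking index across trials for one frequency band.

    trials : array (n_trials, n_samples), sensor time series time-locked to the stimulus
    fs     : sampling rate in Hz
    band   : (low, high) edges of the band in Hz, e.g. (4, 8) for theta
    Returns an array (n_samples,) of inter-trial phase coherence, between 0 and 1."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))

# Hypothetical usage with simulated trials:
fs = 1000
trials = np.random.randn(100, 2 * fs)          # 100 trials, 2 s each
itc_theta = intertrial_phase_coherence(trials, fs, (4, 8))
itc_gamma = intertrial_phase_coherence(trials, fs, (30, 50))
```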

  11. Auditory memory for temporal characteristics of sound.

    Science.gov (United States)

    Zokoll, Melanie A; Klump, Georg M; Langemann, Ulrike

    2008-05-01

    This study evaluates auditory memory for variations in the rate of sinusoidal amplitude modulation (SAM) of noise bursts in the European starling (Sturnus vulgaris). To estimate the extent of the starling's auditory short-term memory store, a delayed non-matching-to-sample paradigm was applied. The birds were trained to discriminate between a series of identical "sample stimuli" and a single "test stimulus". The birds classified SAM rates of sample and test stimuli as being either the same or different. Memory performance of the birds was measured as the percentage of correct classifications. Auditory memory persistence time was estimated as a function of the delay between sample and test stimuli. Memory performance was significantly affected by the delay between sample and test and by the number of sample stimuli presented before the test stimulus, but was not affected by the difference in SAM rate between sample and test stimuli. The individuals' auditory memory persistence times varied between 2 and 13 s. The starlings' auditory memory persistence in the present study for signals varying in the temporal domain was significantly shorter compared to that of a previous study (Zokoll et al. in J Acoust Soc Am 121:2842, 2007) applying tonal stimuli varying in the spectral domain.
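
    The sinusoidally amplitude-modulated (SAM) noise bursts used as sample and test stimuli can be illustrated with a few lines of signal generation. The sketch below is a generic example; the SAM rates, duration, and sampling rate are placeholders rather than the study's values.

```python
import numpy as np

def sam_noise_burst(sam_rate_hz, dur_s=1.0, fs=44100, depth=1.0, seed=None):
    """Broadband noise burst with sinusoidal amplitude modulation (SAM) at
    `sam_rate_hz`. A modulation depth of 1.0 means 100% modulation."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    noise = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2 * np.pi * sam_rate_hz * t)
    burst = envelope * noise
    return burst / np.max(np.abs(burst))   # normalize to +/-1

sample = sam_noise_burst(20.0)   # hypothetical SAM rate for a "sample" stimulus
test   = sam_noise_burst(32.0)   # a different rate for the "test" stimulus
```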

  12. Auditory temporal processing in patients with temporal lobe epilepsy.

    Science.gov (United States)

    Lavasani, Azam Navaei; Mohammadkhani, Ghassem; Motamedi, Mahmoud; Karimi, Leyla Jalilvand; Jalaei, Shohreh; Shojaei, Fereshteh Sadat; Danesh, Ali; Azimi, Hadi

    2016-07-01

    Auditory temporal processing is the main feature of speech processing ability. Patients with temporal lobe epilepsy, despite their normal hearing sensitivity, may present speech recognition disorders. The present study was carried out to evaluate auditory temporal processing in patients with unilateral TLE. It included 25 patients with epilepsy: 11 patients with right temporal lobe epilepsy and 14 with left temporal lobe epilepsy, with a mean age of 31.1 years, and 18 control participants with a mean age of 29.4 years. The two experimental groups and the control group were evaluated via gap-in-noise and duration pattern sequence tests. One-way ANOVA was run to analyze the data. The mean threshold on the GIN test in the control group was observed to be better than that in participants with LTLE and RTLE. Also, it was observed that the percentage of correct responses on the DPS test in the control group and in participants with RTLE was better than that in participants with LTLE. Patients with TLE have difficulties in temporal processing. Difficulties are more significant in patients with LTLE, likely because the left temporal lobe is specialized for the processing of temporal information. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation..., it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models... of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...

  14. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
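
    A commonly used prediction/tracking index in paced tapping correlates inter-tap intervals with pacing inter-onset intervals at lag 0 versus lag 1. The sketch below illustrates that idea under stated assumptions; the exact index and preprocessing used in the study may differ, and the example data are simulated.

```python
import numpy as np

def prediction_index(tap_times, pacing_times):
    """Illustrative prediction/tracking index for paced tapping with tempo changes.

    Correlates inter-tap intervals (ITIs) with pacing inter-onset intervals (IOIs)
    at lag 0 (taps anticipate the current interval -> prediction) and at lag 1
    (taps copy the previous interval -> tracking). Values > 1 indicate a
    predominantly predictive strategy. The index used in the study may differ."""
    iti = np.diff(tap_times)
    ioi = np.diff(pacing_times)
    n = min(len(iti), len(ioi))
    iti, ioi = iti[:n], ioi[:n]
    r_lag0 = np.corrcoef(iti, ioi)[0, 1]           # prediction
    r_lag1 = np.corrcoef(iti[1:], ioi[:-1])[0, 1]  # tracking
    return r_lag0 / r_lag1

# Hypothetical data: pacing sequence with a gradual tempo change and taps that follow it closely.
pacing = np.cumsum(np.linspace(0.60, 0.50, 40))   # IOIs shrink from 600 to 500 ms
taps = pacing + np.random.normal(0.0, 0.01, pacing.size)
print(prediction_index(taps, pacing))
```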

  15. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  16. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  17. EFFECTS OF PHYSICAL REHABILITATION INTEGRATED WITH RHYTHMIC AUDITORY STIMULATION ON SPATIO-TEMPORAL AND KINEMATIC PARAMETERS OF GAIT IN PARKINSON’S DISEASE

    Directory of Open Access Journals (Sweden)

    Massimiliano Pau

    2016-08-01

    Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson’s disease (PD). In this context, the use of Rhythmic Auditory Stimulation (RAS) has been proven useful in improving several spatio-temporal parameters, but scarce information is available on its effect on gait patterns from a kinematic viewpoint. In this study we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of intensive rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4±11.1 years, Hoehn & Yahr 1-3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a three-month follow-up, using concise measures (the Gait Profile Score and Gait Variable Scores, GPS and GVS, respectively), which are able to describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence and correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, which was driven mainly by significant decreases in the GVS score associated with the hip flexion-extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new or improving current rehabilitative treatments.
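
    For orientation, the GPS and GVS summary measures are usually defined as root-mean-square deviations of a subject's time-normalized kinematic curves from normative reference curves. The sketch below follows that common definition; it is not the authors' code, and the reference data are simulated placeholders.

```python
import numpy as np

def gait_variable_score(subject_curve, reference_curve):
    """GVS: RMS difference between a subject's kinematic curve and the reference
    (normative) mean curve, both time-normalized to the gait cycle (e.g., 51 points)."""
    diff = np.asarray(subject_curve) - np.asarray(reference_curve)
    return float(np.sqrt(np.mean(diff ** 2)))

def gait_profile_score(subject_curves, reference_curves):
    """GPS: RMS average of the GVS values over the set of kinematic variables
    (typically pelvis and hip angles in three planes, knee and ankle flexion,
    and foot progression). Returns the GPS and the list of per-variable GVS values."""
    gvs = [gait_variable_score(s, r) for s, r in zip(subject_curves, reference_curves)]
    return float(np.sqrt(np.mean(np.square(gvs)))), gvs

# Hypothetical use with 9 variables sampled at 51 points of the gait cycle:
rng = np.random.default_rng(0)
reference = [np.sin(np.linspace(0, 2 * np.pi, 51)) * 20 for _ in range(9)]
subject = [r + rng.normal(0, 3, r.size) for r in reference]
gps, gvs = gait_profile_score(subject, reference)
```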

  18. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.
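
    Just noticeable differences in a visual TOJ task are typically obtained by fitting a psychometric function to the proportion of "second stimulus first" responses across onset asynchronies. The sketch below shows one common way to do this with a cumulative Gaussian; the JND convention (half the 25-75% range) and the example data are assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_toj(soas_ms, p_second_first):
    """Fit a cumulative Gaussian to visual temporal-order-judgment data.

    soas_ms        : stimulus onset asynchronies (negative = light 1 first)
    p_second_first : proportion of 'light 2 appeared first' responses at each SOA
    Returns (pss, jnd): the PSS is the 50% point, and the JND is taken here as
    half the 25%-75% interquartile range (0.674 * sigma); conventions vary."""
    cdf = lambda x, mu, sigma: norm.cdf(x, loc=mu, scale=sigma)
    (mu, sigma), _ = curve_fit(cdf, soas_ms, p_second_first, p0=[0.0, 50.0])
    return mu, 0.674 * sigma

# Hypothetical data from one observer:
soas = np.array([-240, -120, -60, -30, 30, 60, 120, 240])
props = np.array([0.02, 0.10, 0.25, 0.40, 0.60, 0.78, 0.92, 0.99])
pss, jnd = fit_toj(soas, props)
```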

  19. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

    van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.

    2007-01-01

    Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in

  20. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

    van Kesteren, Marlieke T R; Wiersinga-Post, J Esther C

    2007-01-01

    PURPOSE: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in

  1. Spectro-temporal characterization of auditory neurons: redundant or necessary?

    NARCIS (Netherlands)

    Eggermont, J.J.; Aertsen, A.M.H.J.; Hermes, D.J.; Johannesma, P.I.M.

    1981-01-01

    For neurons in the auditory midbrain of the grass frog the use of a combined spectro-temporal characterization has been evaluated against the separate characterizations of frequency-sensitivity and temporal response properties. By factoring the joint density function of stimulus intensity, I(f, t),

  2. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporally reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured in discrete temporal chunks in long-term auditory memory representation.

  4. Temporal Organization of Sound Information in Auditory Memory.

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporally reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured in discrete temporal chunks in long-term auditory memory representation.

  5. Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback.

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) for three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  6. Forward Masking: Temporal Integration or Adaptation?

    DEFF Research Database (Denmark)

    Ewert, Stephan D.; Hau, Ole; Dau, Torsten

    2007-01-01

    and the physiological mechanisms of binaural processing in mammals; integration of the different stimulus features into auditory scene analysis; physiological mechanisms related to the formation of auditory objects; speech perception; and limitations of auditory perception resulting from hearing disorders....

  7. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki Ito

    2014-11-01

    Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.

  8. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    Science.gov (United States)

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.

  10. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
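
    The study quantified inter-areal coupling with directed coherence, a directional measure based on multivariate autoregressive modelling. The much simpler sketch below computes ordinary (non-directional) spectral coherence between two simulated field-potential channels, only to illustrate the general idea of frequency-resolved coupling; it is not the DCOH analysis used by the authors, and all signal parameters are made up.

```python
import numpy as np
from scipy.signal import coherence

# Two simulated local field potential channels (stand-ins for, e.g., PAC and SAC).
fs = 1000
t = np.arange(0, 10, 1 / fs)
am = np.sin(2 * np.pi * 8 * t)                         # shared 8 Hz envelope-related component
pac = am + 0.5 * np.random.randn(t.size)
sac = np.roll(am, 20) + 0.5 * np.random.randn(t.size)  # delayed, noisy copy

# Magnitude-squared coherence as a function of frequency.
f, cxy = coherence(pac, sac, fs=fs, nperseg=2048)
print(f[np.argmax(cxy)], cxy.max())   # frequency of strongest coupling
```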

  11. Temporal Organization of Sound Information in Auditory Memory

    OpenAIRE

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed ...

  12. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. Participants read sentences with specific delay times of DAF (0, 30, 75, 120 ms) for three minutes to induce 'Lag Adaptation'. After the adaptation, they judged the simultaneity between the motor sensation and the vocal sound given as feedback when producing a simple voice sound, but not speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  13. Temporal Resolution and Active Auditory Discrimination Skill in Vocal Musicians

    Directory of Open Access Journals (Sweden)

    Kumar, Prawin

    2015-12-01

    Introduction Enhanced auditory perception in musicians is likely to result from auditory perceptual learning during several years of training and practice. Many studies have focused on biological processing of auditory stimuli among musicians. However, there is a lack of literature on temporal resolution and active auditory discrimination skills in vocal musicians. Objective The aim of the present study is to assess temporal resolution and active auditory discrimination skill in vocal musicians. Method The study participants included 15 vocal musicians with a minimum professional experience of 5 years of music exposure, within the age range of 20 to 30 years old, as the experimental group, while 15 age-matched non-musicians served as the control group. We used duration discrimination using pure-tones, pulse-train duration discrimination, and gap detection threshold tasks to assess temporal processing skills in both groups. Similarly, we assessed active auditory discrimination skill in both groups using the Differential Limen of Frequency (DLF). All tasks were carried out in MATLAB on a personal computer, at 40 dB SL, using a maximum likelihood procedure. The collected data were analyzed using SPSS (version 17.0). Result Descriptive statistics showed better thresholds for vocal musicians compared with non-musicians for all tasks. Further, independent t-tests showed that vocal musicians performed significantly better than non-musicians on duration discrimination using pure tones, pulse-train duration discrimination, gap detection threshold, and differential limen of frequency. Conclusion The present study showed enhanced temporal resolution ability and better (lower) active discrimination thresholds in vocal musicians in comparison to non-musicians.
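
    The thresholds above were tracked with a maximum likelihood procedure in MATLAB. As a simpler stand-in, the sketch below simulates a 2-down/1-up adaptive staircase for a gap-detection threshold (converging near 70.7% correct); the psychometric function, step size, and parameter names are all assumptions for illustration, not the study's procedure.

```python
import numpy as np

def two_down_one_up(start_gap_ms=20.0, step_factor=1.26, n_reversals=10,
                    true_threshold_ms=4.0, slope=4.0, seed=0):
    """Simulated 2-down/1-up adaptive track for a gap-detection threshold.
    The track converges near the 70.7%-correct point of a simulated listener."""
    rng = np.random.default_rng(seed)
    gap = start_gap_ms
    correct_in_row, direction, reversals = 0, None, []
    while len(reversals) < n_reversals:
        # Simulated listener: logistic psychometric function of log gap duration.
        p = 0.5 + 0.5 / (1 + np.exp(-slope * np.log(gap / true_threshold_ms)))
        correct = rng.random() < p
        if correct:
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> harder (shorter gap)
                new_dir = "down"
                gap /= step_factor
                correct_in_row = 0
            else:
                continue                        # first correct: no level change yet
        else:                                   # one wrong -> easier (longer gap)
            new_dir = "up"
            gap *= step_factor
            correct_in_row = 0
        if direction and new_dir != direction:  # direction flip = reversal
            reversals.append(gap)
        direction = new_dir
    # Threshold estimate: geometric mean of the last reversals.
    return float(np.exp(np.mean(np.log(reversals[-6:]))))

print(two_down_one_up())
```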

  14. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users, who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated

  15. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.
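
    The MMN index referred to here is conventionally computed as the deviant-minus-standard difference wave, with its negative peak sought in a typical latency window. The sketch below shows that generic computation; the electrode choice, window, and array shapes are assumptions rather than details of the study.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, times_ms, window=(100, 250)):
    """MMN difference wave: averaged deviant ERP minus averaged standard ERP,
    with the peak negativity located in a typical 100-250 ms window.

    *_epochs : arrays (n_trials, n_samples) for one electrode (e.g., Fz)
    times_ms : array (n_samples,) of time points relative to sound onset"""
    difference = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    peak_idx = np.argmin(difference[mask])
    return difference, times_ms[mask][peak_idx], difference[mask][peak_idx]

# Hypothetical usage with simulated epochs sampled every 2 ms from -100 to 400 ms:
times = np.arange(-100, 400, 2)
std_ep = np.random.randn(200, times.size)
dev_ep = np.random.randn(100, times.size)
diff_wave, mmn_latency, mmn_amplitude = mismatch_negativity(std_ep, dev_ep, times)
```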

  16. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective.

    Science.gov (United States)

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2015-02-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  18. Auditory processing in patients with temporal lobe epilepsy [Processamento auditivo em indivíduos com epilepsia de lobo temporal]

    Directory of Open Access Journals (Sweden)

    Juliana Meneguello

    2006-08-01

    Temporal lobe epilepsy, one of the most common and most difficult-to-control forms of the disease, causes excessive electrical discharges in the region where the auditory pathway has its final station. Correct processing of auditory stimuli requires the anatomical and functional integrity of all structures involved in the auditory pathway. AIM: To assess auditory processing in patients with temporal lobe epilepsy with respect to the mechanisms of discrimination of sequential sounds and tone patterns, discrimination of the sound source direction, and selective attention to verbal and non-verbal sounds. METHOD: Eight individuals with confirmed temporal lobe epilepsy, with the focus restricted to this region, were evaluated using special auditory tests: the Sound Localization Test, the Duration Pattern Test, the Dichotic Digits Test, and the Non-Verbal Dichotic Test. Their performance was compared with that of individuals without neurological alterations (case-control study). RESULTS: Subjects with temporal lobe epilepsy performed similarly to the control group on the mechanism of discrimination of the sound source direction and worse on the other mechanisms evaluated. CONCLUSION: Individuals with temporal lobe epilepsy showed greater impairment in auditory processing than age-matched individuals without cortical damage.

  19. MODELING SPECTRAL AND TEMPORAL MASKING IN THE HUMAN AUDITORY SYSTEM

    DEFF Research Database (Denmark)

    Dau, Torsten; Jepsen, Morten Løve; Ewert, Stephan D.

    2007-01-01

    An auditory signal processing model is presented that simulates psychoacoustical data from a large variety of experimental conditions related to spectral and temporal masking. The model is based on the modulation filterbank model by Dau et al. [J. Acoust. Soc. Am. 102, 2892-2905 (1997)] but includes ... The model was tested in conditions of tone-in-noise masking, intensity discrimination, spectral masking with tones and narrowband noises, forward masking with (on- and off-frequency) noise- and pure-tone maskers, and amplitude modulation detection using different noise carrier bandwidths. One of the key properties...
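
    A heavily reduced sketch of the modulation-filterbank idea is shown below: extract the temporal envelope of a signal and analyze it with a small bank of modulation bandpass filters. The full model includes peripheral (gammatone) filtering, hair-cell transduction, and adaptation stages that are omitted here; filter orders and center frequencies are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_filterbank_envelopes(signal, fs, center_freqs=(4, 8, 16, 32, 64, 128)):
    """Reduced sketch of modulation-filterbank processing: take the Hilbert envelope
    of a single-channel signal and pass it through octave-spaced modulation
    bandpass filters, returning one filtered envelope per center frequency."""
    envelope = np.abs(hilbert(signal))
    outputs = {}
    for fc in center_freqs:
        lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)    # one-octave-wide bands
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        outputs[fc] = filtfilt(b, a, envelope)
    return outputs

# Hypothetical input: noise amplitude-modulated at 16 Hz; the 16 Hz channel should dominate.
fs = 16000
t = np.arange(0, 1, 1 / fs)
stim = (1 + np.sin(2 * np.pi * 16 * t)) * np.random.randn(t.size)
bands = modulation_filterbank_envelopes(stim, fs)
rms_per_band = {fc: np.sqrt(np.mean(v ** 2)) for fc, v in bands.items()}
```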

  20. Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Science.gov (United States)

    Muñoz-López, M; Insausti, R; Mohedano-Moriano, A; Mishkin, M; Saunders, R C

    2015-01-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may differ from those in vision. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

  1. Anatomical pathways for auditory memory II: Information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Directory of Open Access Journals (Sweden)

    Monica eMunoz-Lopez

    2015-05-01

    Full Text Available Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 minutes. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 seconds. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys’ auditory memory performance. It is possible, therefore, that the anatomical pathways differ. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex, and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

  2. Frontal and superior temporal auditory processing abnormalities in schizophrenia.

    Science.gov (United States)

    Chen, Yu-Han; Edgar, J Christopher; Huang, Mingxiong; Hunter, Michael A; Epstein, Emerson; Howell, Breannan; Lu, Brett Y; Bustillo, Juan; Miller, Gregory A; Cañive, José M

    2013-01-01

    Although magnetoencephalography (MEG) studies show superior temporal gyrus (STG) auditory processing abnormalities in schizophrenia at 50 and 100 ms, EEG and corticography studies suggest involvement of additional brain areas (e.g., frontal areas) during this interval. Study goals were to identify 30 to 130 ms auditory encoding processes in schizophrenia (SZ) and healthy controls (HC) and group differences throughout the cortex. The standard paired-click task was administered to 19 SZ and 21 HC subjects during MEG recording. Vector-based Spatial-temporal Analysis using L1-minimum-norm (VESTAL) provided 4D maps of activity from 30 to 130 ms. Within-group t-tests compared post-stimulus 50 ms and 100 ms activity to baseline. Between-group t-tests examined 50 and 100 ms group differences. Bilateral 50 and 100 ms STG activity was observed in both groups. HC had stronger bilateral 50 and 100 ms STG activity than SZ. In addition to the STG group difference, non-STG activity was also observed in both groups. For example, whereas HC had stronger left and right inferior frontal gyrus activity than SZ, SZ had stronger right superior frontal gyrus and left supramarginal gyrus activity than HC. Less STG activity was observed in SZ than HC, indicating encoding problems in SZ. Yet auditory encoding abnormalities are not specific to STG, as group differences were observed in frontal and SMG areas. Thus, present findings indicate that individuals with SZ show abnormalities in multiple nodes of a concurrently activated auditory network.

  3. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10-50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements...

  4. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  5. Neural basis of the time window for subjective motor-auditory integration

    Directory of Open Access Journals (Sweden)

    Koichi eToida

    2016-01-01

    Full Text Available Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and an N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence with the reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤ 200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally-deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
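    As a rough illustration of how deviance-related components such as the MMN are typically extracted from an oddball recording, the sketch below averages standard and deviant epochs separately and takes their difference wave. The epoch window, baseline interval, and all variable names are illustrative assumptions, not the authors' pipeline.

        import numpy as np

        def epochs(eeg, event_samples, fs, tmin=-0.1, tmax=0.4):
            # Cut fixed-length epochs around each event and subtract the
            # pre-stimulus baseline.
            pre, post = int(-tmin * fs), int(tmax * fs)
            segs = []
            for ev in event_samples:
                seg = eeg[ev - pre:ev + post].astype(float)
                segs.append(seg - seg[:pre].mean())
            return np.array(segs)

        def difference_wave(eeg, standard_events, deviant_events, fs):
            # Deviant-minus-standard average; deviance-related components
            # (e.g. the MMN) appear as deflections in this trace.
            standard = epochs(eeg, standard_events, fs).mean(axis=0)
            deviant = epochs(eeg, deviant_events, fs).mean(axis=0)
            return deviant - standard

        # Usage with synthetic single-channel data (a stand-in for one electrode):
        fs = 500
        eeg = np.random.randn(fs * 600)                  # 10 minutes of "EEG"
        all_events = np.arange(fs, fs * 595, fs)         # hypothetical onsets, one per second
        deviant_events = all_events[::8]                 # every 8th stimulus is the "deviant"
        standard_events = np.setdiff1d(all_events, deviant_events)
        mmn_like = difference_wave(eeg, standard_events, deviant_events, fs)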

  6. Identification enhancement of auditory evoked potentials in EEG by epoch concatenation and temporal decorrelation.

    Science.gov (United States)

    Zavala-Fernandez, H; Orglmeister, R; Trahms, L; Sander, T H

    2012-12-01

    Event-related potentials (ERP) recorded by electroencephalography (EEG) are brain responses following an external stimulus, e.g., a sound or an image. They are used in fundamental cognitive research and neurological and psychiatric clinical research. ERPs are weaker than spontaneous brain activity and therefore it is difficult or even impossible to identify an ERP in the brain activity following an individual stimulus. For this reason, a blind source separation method relying on statistical information is proposed for the isolation of ERP after auditory stimulation. In this paper it is suggested to integrate epoch concatenation into the popular temporal decorrelation algorithm SOBI/TDSEP relying on time-shifted correlations. With the proposed epoch concatenation temporal decorrelation (ecTD) algorithm a component representing the auditory evoked potential (AEP) is found in electroencephalographic data from an auditory stimulation experiment lasting 3 min. The ecTD result is compared with the averaged AEP and it is superior to the result from the SOBI/TDSEP algorithm. Furthermore the ecTD processing leads to significant increases in the signal-to-noise ratio (shape SNR) of the AEP and reduces the computation time by 50% compared to the SOBI/TDSEP calculation. It can be concluded that data concatenation in combination with temporal decorrelation is useful for isolating and improving the properties of an AEP especially in a short duration stimulation experiment. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
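    The core idea, concatenating the post-stimulus epochs and then unmixing sources by temporal decorrelation of time-lagged covariances, can be sketched as follows. For brevity the sketch diagonalizes a single lag (an AMUSE-style simplification) rather than jointly diagonalizing many lags as SOBI/TDSEP does, and all variable names and array shapes are illustrative assumptions.

        import numpy as np

        def concatenate_epochs(epoch_array):
            # (n_epochs, n_channels, n_samples) -> (n_channels, n_epochs * n_samples)
            return np.hstack(list(epoch_array))

        def temporal_decorrelation(x, lag=1):
            # Whiten the data, then diagonalize the symmetrized time-lagged
            # covariance of the whitened data (single-lag temporal decorrelation).
            x = x - x.mean(axis=1, keepdims=True)
            c0 = x @ x.T / x.shape[1]
            evals, evecs = np.linalg.eigh(c0)
            whitener = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
            z = whitener @ x
            c_lag = z[:, lag:] @ z[:, :-lag].T / (z.shape[1] - lag)
            c_lag = (c_lag + c_lag.T) / 2.0
            _, v = np.linalg.eigh(c_lag)
            unmixing = v.T @ whitener
            return unmixing @ x, unmixing

        # Usage: epochs recorded after each auditory stimulus are concatenated
        # before unmixing; a component resembling the averaged AEP can then be
        # selected from the separated sources.
        rng = np.random.default_rng(0)
        fake_epochs = rng.standard_normal((200, 32, 250))   # (epochs, channels, samples)
        x = concatenate_epochs(fake_epochs)
        sources, w = temporal_decorrelation(x, lag=1)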

  7. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study

    OpenAIRE

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    PURPOSE: To investigate the existence of correlations between the performance of children in auditory temporal tests (Frequency Pattern and Gaps in Noise - GIN) and IQ, attention, memory and age measurements. METHOD: Fifteen typically developing individuals between the ages of 7 to 12 years and normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), as well as a Memory test (Digit Span), Attention tests (auditory and visual modality) and ...

  8. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria eMedalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  9. The role of the temporal pole in modulating primitive auditory memory.

    Science.gov (United States)

    Liu, Zhiliang; Wang, Qian; You, Yu; Yin, Peng; Ding, Hu; Bao, Xiaohan; Yang, Pengcheng; Lu, Hao; Gao, Yayue; Li, Liang

    2016-04-21

    Primitive auditory memory (PAM), which is recognized as the early point in the chain of the transient auditory memory system, faithfully maintains raw acoustic fine-structure signals for up to 20-30 milliseconds. The neural mechanisms underlying PAM have not been reported in the literature. Previous anatomical, brain-imaging, and neurophysiological studies have suggested that the temporal pole (TP), part of the parahippocampal region in the transitional area between perirhinal cortex and superior/inferior temporal gyri, is involved in auditory memories. This study investigated whether the TP plays a role in mediating/modulating PAM. The longest interaural interval (the interaural-delay threshold) for detecting a break in interaural correlation (BIC) embedded in interaurally correlated wideband noises was used to indicate the temporal preservation of PAM and examined in both healthy listeners and patients receiving unilateral anterior temporal lobectomy (ATL, centered on the TP) for treating their temporal lobe epilepsy (TLE). The results showed that patients with ATL were still able to detect the BIC even when an interaural interval was introduced, regardless of which ear was the leading one. However, in patient participants, the group-mean interaural-delay threshold for detecting the BIC under the contralateral-ear-leading (relative to the side of ATL) condition was significantly shorter than that under the ipsilateral-ear-leading condition. The results suggest that although the TP is not essential for integrating binaural signals and mediating the PAM, it plays a role in top-down modulating the PAM of raw acoustic fine-structure signals from the contralateral ear. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Segregation and integration of auditory streams when listening to multi-part music.

    Science.gov (United States)

    Ragert, Marie; Fairhurst, Merle T; Keller, Peter E

    2014-01-01

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams

  11. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    Full Text Available In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams.

  12. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input.

    Directory of Open Access Journals (Sweden)

    Max F K Happel

    Full Text Available Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex.

  13. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study.

    Science.gov (United States)

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    To investigate the existence of correlations between the performance of children in auditory temporal tests (Frequency Pattern and Gaps in Noise - GIN) and IQ, attention, memory and age measurements. Fifteen typically developing individuals between the ages of 7 and 12 years with normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), as well as a Memory test (Digit Span), Attention tests (auditory and visual modality) and intelligence tests (RAVEN test of Progressive Matrices) were applied. A significant positive correlation between the Frequency Pattern test and age was found, which was considered good (p<0.01, 75.6%). There were no significant correlations between the GIN test and the variables tested. Auditory temporal skills seem to be influenced by different factors: while the performance in temporal ordering skill seems to be influenced by maturational processes, the performance in temporal resolution was not influenced by any of the aspects investigated.
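    As a simple illustration of the kind of correlation analysis reported here, the snippet below computes a Pearson correlation between age and a test score; the numbers are invented placeholders, not the study's data.

        import numpy as np
        from scipy.stats import pearsonr

        # Hypothetical ages (years) and Frequency Pattern scores (% correct) for 15 children.
        age = np.array([7, 7, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11, 12, 12, 12])
        score = np.array([45, 52, 50, 58, 61, 60, 66, 68, 71, 65, 75, 78, 80, 83, 79])

        r, p = pearsonr(age, score)   # correlation coefficient and two-sided p-value
        print(f"r = {r:.2f}, p = {p:.4f}")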

  14. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    Science.gov (United States)

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

    Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated

  15. Temporal integration of consecutive tones into synthetic vowels demonstrates perceptual assembly in audition

    NARCIS (Netherlands)

    Saija, Jefta D.; Andringa, Tjeerd C.; Başkent, Deniz; Akyürek, Elkan G.

    Temporal integration is the perceptual process combining sensory stimulation over time into longer percepts that can span over 10 times the duration of a minimally detectable stimulus. Particularly in the auditory domain, such "long-term" temporal integration has been characterized as a relatively

  16. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  17. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  18. Acquired word deafness, and the temporal grain of sound representation in the primary auditory cortex.

    Science.gov (United States)

    Phillips, D P; Farmer, M E

    1990-11-15

    This paper explores the nature of the processing disorder which underlies the speech discrimination deficit in the syndrome of acquired word deafness following from pathology to the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.

  19. Listening to another sense: somatosensory integration in the auditory system.

    Science.gov (United States)

    Wu, Calvin; Stefanescu, Roxana A; Martel, David T; Shore, Susan E

    2015-07-01

    Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.

  20. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    Science.gov (United States)

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies showed that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight University students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300ms (short) and 900ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability was observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality independent role of A1 in temporal processing and a modality specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    Science.gov (United States)

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  2. The Role of Visual and Auditory Temporal Processing for Chinese Children with Developmental Dyslexia

    Science.gov (United States)

    Chung, Kevin K. H.; McBride-Chang, Catherine; Wong, Simpson W. L.; Cheung, Him; Penney, Trevor B.; Ho, Connie S. -H.

    2008-01-01

    This study examined temporal processing in relation to Chinese reading acquisition and impairment. The performances of 26 Chinese primary school children with developmental dyslexia on tasks of visual and auditory temporal order judgement, rapid naming, visual-orthographic knowledge, morphological, and phonological awareness were compared with…

  3. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alteration pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  4. Effects of Temporal Congruity Between Auditory and Visual Stimuli Using Rapid Audio-Visual Serial Presentation.

    Science.gov (United States)

    An, Xingwei; Tang, Jiabei; Liu, Shuang; He, Feng; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2016-10-01

    Combining visual and auditory stimuli in event-related potential (ERP)-based spellers has gained more attention in recent years. Few of these studies noticed the differences in ERP components and system efficiency caused by shifting the visual and auditory onsets. Here, we aim to study the effect of temporal congruity of auditory and visual stimulus onset on a bimodal brain-computer interface (BCI) speller. We designed five visual and auditory combined paradigms with different visual-to-auditory delays (-33 to +100 ms). Eleven participants took part in this study. ERPs were acquired and aligned according to visual and auditory stimulus onset, respectively. ERPs of Fz, Cz, and PO7 channels were studied through the statistical analysis of different conditions both from visual-aligned ERPs and audio-aligned ERPs. Based on the visual-aligned ERPs, classification accuracy was also analyzed to seek the effects of visual-to-auditory delays. The latencies of ERP components depended mainly on the visual stimulus onset. Auditory stimulus onsets mainly influenced early component accuracies, whereas visual stimulus onset determined later component accuracies. The latter, however, played a dominant role in overall classification. This study is important for further studies to achieve better explanations and ultimately determine the way to optimize the bimodal BCI application.

  5. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  6. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  7. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    Science.gov (United States)

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remains unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  8. Temporal processing and long-latency auditory evoked potential in stutterers.

    Science.gov (United States)

    Prestes, Raquel; de Andrade, Adriana Neves; Santos, Renata Beatriz Fernandes; Marangoni, Andrea Tortosa; Schiefer, Ana Maria; Gil, Daniela

    Stuttering is a speech fluency disorder, and may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. To characterize the temporal processing and long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n=20) and non-stutterers (n=21), compared according to age, education, and sex. All subjects were submitted to the duration pattern tests, random gap detection test, and long-latency auditory evoked potential. Individuals who stutter showed poorer performance on Duration Pattern and Random Gap Detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of N2 and P3 components; stutterers had higher latency values. Stutterers have poor performance in temporal processing and higher latency values for N2 and P3 components. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  9. Temporal processing and long-latency auditory evoked potential in stutterers

    Directory of Open Access Journals (Sweden)

    Raquel Prestes

    Full Text Available Abstract Introduction: Stuttering is a speech fluency disorder, and may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. Objective: To characterize the temporal processing and long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. Methods: The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n = 20) and non-stutterers (n = 21), compared according to age, education, and sex. All subjects were submitted to the duration pattern tests, random gap detection test, and long-latency auditory evoked potential. Results: Individuals who stutter showed poorer performance on Duration Pattern and Random Gap Detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of N2 and P3 components; stutterers had higher latency values. Conclusion: Stutterers have poor performance in temporal processing and higher latency values for N2 and P3 components.

  10. The effects of context and musical training on auditory temporal-interval discrimination.

    Science.gov (United States)

    Banai, Karen; Fisher, Shirley; Ganot, Ron

    2012-02-01

    Non-sensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown. Whether individual experiences such as musical training alter the context effect is also unknown. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed context conditions in which a single base temporal interval was presented repeatedly across all trials and variable context conditions in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across 3 conditions: a fixed context condition in which the target interval was presented repeatedly across trials, and two variable context conditions differing in the frequencies used for the tones marking the temporal intervals. Musicians outperformed non-musicians on all 3 conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that improved discrimination skills among musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    Science.gov (United States)

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  12. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    Science.gov (United States)

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  13. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  14. Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment.

    Science.gov (United States)

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-07-01

    Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information (i.e., a temporal 'stamp') is extracted within the early stages of cortical processes within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  15. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and rates of sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  16. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  17. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  18. High-order passive photonic temporal integrators.

    Science.gov (United States)

    Asghari, Mohammad H; Wang, Chao; Yao, Jianping; Azaña, José

    2010-04-15

    We experimentally demonstrate, for the first time to our knowledge, an ultrafast photonic high-order (second-order) complex-field temporal integrator. The demonstrated device uses a single apodized uniform-period fiber Bragg grating (FBG), and it is based on a general FBG design approach for implementing optimized arbitrary-order photonic passive temporal integrators. Using this same design approach, we also fabricate and test a first-order passive temporal integrator offering an energetic-efficiency improvement of more than 1 order of magnitude as compared with previously reported passive first-order temporal integrators. Accurate and efficient first- and second-order temporal integrations of ultrafast complex-field optical signals (with temporal features as fast as approximately 2.5ps) are successfully demonstrated using the fabricated FBG devices.

  19. Spectro-Temporal Methods in Primary Auditory Cortex

    National Research Council Canada - National Science Library

    Klein, David; Depireux, Didier; Simon, Jonathan; Shamma, Shihab

    2006-01-01

    .... This briefing examines Spike-Triggered Averaging. Spike-Triggered Averaging is an effective method to measure the STRF, when used with Temporally Orthogonal Ripple Combinations (TORCs) as stimuli...

  20. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Fibrous Dysplasia of the Temporal Bone with External Auditory Canal Stenosis and Secondary Cholesteatoma.

    Science.gov (United States)

    Liu, Yu-Hsi; Chang, Kuo-Ping

    2016-04-01

    Fibrous dysplasia is a slowly progressive benign fibro-osseous disease, rarely occurring in temporal bones. In such cases, most bony lesions develop from the bony part of the external auditory canal, causing otalgia, hearing impairment, otorrhea, and impaired ear hygiene, and probably leading to secondary cholesteatoma. We present the medical history of a 24-year-old woman with temporal monostotic fibrous dysplasia with secondary cholesteatoma. The initial presentation was unilateral conductive hearing loss. A hard external canal tumor contributing to canal stenosis and a near-absent tympanic membrane were found. Canaloplasty and type I tympanoplasty were performed, but the symptoms recurred after 5 years. She then received a canal-wall-down tympanomastoidectomy with ossiculoplasty, and secondary cholesteatoma in the middle ear was diagnosed. Fifteen years later, left otorrhea recurred and transcanal endoscopic surgery was performed for middle ear clearance. Currently, the revision surgeries provide a stable auditory condition, but her monostotic temporal fibrous dysplasia remains in place.

  2. Effects of tonotopicity, adaptation, modulation tuning, and temporal coherence in “primitive” auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2014-01-01

    ., Neuron 61, 317–329 (2009)]. Two experimental paradigms were considered: (i) Stream segregation as a function of tone repetition time (TRT) and frequency separation (Df) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present...... asynchrony of spectral components, facilitating the listeners’ ability to segregate temporally overlapping sounds into separate auditory objects. Overall, the modeling framework may be useful to study the contributions of bottom-up auditory features on “primitive” grouping, also in more complex acoustic...

  3. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies...... of the rate of clicks in calls. The majority of neurons (85%) were selective for click rates, and this selectivity remained unchanged over sound levels 10 to 20 dB above threshold. Selective neurons give phasic, tonic, or adapting responses to tone bursts and click trains. Some algorithms that could compute...

  4. Task-dependent modulation of regions in the left temporal cortex during auditory sentence comprehension.

    Science.gov (United States)

    Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping

    2015-01-01

    Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Neural correlates of auditory short-term memory in rostral superior temporal cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo

    2014-12-01

    Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Local field potential correlates of auditory working memory in primate dorsal temporal pole.

    Science.gov (United States)

    Bigelow, James; Ng, Chi-Wing; Poremba, Amy

    2016-06-01

    Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components were observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special

  7. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. The Effect of Auditory Cueing on the Spatial and Temporal Gait Coordination in Healthy Adults.

    Science.gov (United States)

    Almarwani, Maha; Van Swearingen, Jessie M; Perera, Subashan; Sparto, Patrick J; Brach, Jennifer S

    2017-12-27

    Walk ratio, defined as step length divided by cadence, indicates the coordination of gait. During free walking, deviation from the preferential walk ratio may reveal abnormalities of walking patterns. The purpose of this study was to examine the impact of rhythmic auditory cueing (metronome) on the neuromotor control of gait at different walking speeds. Forty adults (mean age 26.6 ± 6.0 years) participated in the study. Gait characteristics were collected using a computerized walkway. At the preferred walking speed, there was no significant difference in walk ratio between uncued (walk ratio = .0064 ± .0007 m/steps/min) and metronome-cued walking (walk ratio = .0064 ± .0007 m/steps/min; p = .791). A higher walk ratio at the slower speed was observed with metronome-cued walking (walk ratio = .0071 ± .0008 m/steps/min) compared to uncued walking (walk ratio = .0068 ± .0007 m/steps/min), and a lower walk ratio at the faster speed was observed with metronome-cued walking (walk ratio = .0060 ± .0009 m/steps/min) compared to uncued walking (walk ratio = .0062 ± .0009 m/steps/min; p = .005). In healthy adults, the metronome cues may become an attentionally demanding task, and thereby disrupt the spatial and temporal integration of gait at nonpreferred speeds.
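
    As a rough numerical illustration of the walk-ratio metric defined above (step length divided by cadence), the sketch below computes it for hypothetical gait values; the function name and the numbers are invented for illustration and are not data from the study.

```python
# Minimal sketch: walk ratio = step length (m) / cadence (steps/min).
# All values are hypothetical, chosen only to land near the ~.0064
# m/steps/min reported for preferred-speed walking.

def walk_ratio(step_length_m: float, cadence_steps_per_min: float) -> float:
    """Return the walk ratio in m/steps/min."""
    return step_length_m / cadence_steps_per_min

preferred = walk_ratio(step_length_m=0.70, cadence_steps_per_min=110.0)
slow = walk_ratio(step_length_m=0.60, cadence_steps_per_min=88.0)
print(f"preferred speed: {preferred:.4f} m/steps/min")  # ~0.0064
print(f"slow speed:      {slow:.4f} m/steps/min")       # ~0.0068
```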

  9. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization flexibly connects the brain areas that manipulate WM.

  10. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory

    Directory of Open Access Journals (Sweden)

    Masahiro eKawasaki

    2014-03-01

    Full Text Available In humans, theta phase (4–8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization flexibly connects the brain areas that manipulate WM.

  11. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling was at both frequencies stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole

  12. Temporal Feature Integration for Music Organisation

    DEFF Research Database (Denmark)

    Meng, Anders

    2006-01-01

    This Ph.D. thesis focuses on temporal feature integration for music organisation. Temporal feature integration is the process of combining all the feature vectors of a given time-frame into a single new feature vector in order to capture relevant information in the frame. Several existing methods...... for handling sequences of features are formulated in the temporal feature integration framework. Two datasets for music genre classification have been considered as valid test-beds for music organisation. Human evaluations of these, have been obtained to access the subjectivity on the datasets. Temporal...... ranking' approach is proposed for ranking the short-time features at larger time-scales according to their discriminative power in a music genre classification task. The multivariate AR (MAR) model has been proposed for temporal feature integration. It effectively models local dynamical structure...
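
    The core operation described above, collapsing the short-time feature vectors of a frame into a single vector, can be illustrated with the simplest baseline used in this line of work, mean-variance stacking. The sketch below is a generic illustration under assumed array shapes, not the thesis' multivariate autoregressive (MAR) implementation.

```python
import numpy as np

def mean_var_integration(frame: np.ndarray) -> np.ndarray:
    """Collapse a (n_timesteps, n_features) frame of short-time feature
    vectors into one vector by stacking the per-dimension mean and variance.
    Richer integration models (e.g., multivariate autoregressive features)
    additionally capture temporal dynamics and cross-feature dependencies."""
    return np.concatenate([frame.mean(axis=0), frame.var(axis=0)])

# Example: 100 short-time vectors of 13 MFCC-like features -> one 26-dim vector.
rng = np.random.default_rng(0)
frame = rng.normal(size=(100, 13))
print(mean_var_integration(frame).shape)  # (26,)
```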

  13. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    Science.gov (United States)

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  15. Neural correlates of auditory recognition memory in the primate dorsal temporal pole.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2014-02-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.

  16. Auditory temporal perceptual learning and transfer in Chinese-speaking children with developmental dyslexia.

    Science.gov (United States)

    Zhang, Manli; Xie, Weiyi; Xu, Yanzhi; Meng, Xiangzhi

    2018-03-01

    Perceptual learning refers to the improvement of perceptual performance as a function of training. Recent studies found that auditory perceptual learning may improve phonological skills in individuals with developmental dyslexia in alphabetic writing systems. However, whether auditory perceptual learning could also benefit the reading skills of those learning the Chinese logographic writing system is, as yet, unknown. The current study aimed to investigate the remediation effect of auditory temporal perceptual learning on Mandarin-speaking school children with developmental dyslexia. Thirty children with dyslexia were screened from a large pool of students in 3rd-5th grades. They completed a series of pretests and then were assigned to either a non-training control group or a training group. The training group worked on a pure tone duration discrimination task for 7 sessions over 2 weeks with thirty minutes per session. Post-tests immediately after training and a follow-up test 2 months later were conducted. Analyses revealed a significant training effect in the training group relative to the non-training group, as well as near transfer to the temporal interval discrimination task and far transfer to phonological awareness, character recognition and reading fluency. Importantly, the training effect and all the transfer effects were stable at the 2-month follow-up session. Further analyses found that a significant correlation between character recognition performance and learning rate mainly existed in the slow learning phase, the consolidation stage of perceptual learning, and this effect was modulated by an individual's executive function. These findings indicate that adaptive auditory temporal perceptual learning can lead to learning and transfer effects on reading performance, and shed further light on the potential role of basic perceptual learning in the remediation and prevention of developmental dyslexia. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Temporal feature integration for music genre classification

    DEFF Research Database (Denmark)

    Meng, Anders; Ahrendt, Peter; Larsen, Jan

    2007-01-01

    , but they capture neither the temporal dynamics nor dependencies among the individual feature dimensions. Here, a multivariate autoregressive feature model is proposed to solve this problem for music genre classification. This model gives two different feature sets, the diagonal autoregressive (DAR......) and multivariate autoregressive (MAR) features which are compared against the baseline mean-variance as well as two other temporal feature integration techniques. Reproducibility in performance ranking of temporal feature integration methods were demonstrated using two data sets with five and eleven music genres...

  18. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS.

  19. Auditory event-related potentials in children with benign epilepsy with centro-temporal spikes.

    Science.gov (United States)

    Tomé, David; Sampaio, Mafalda; Mendes-Ribeiro, José; Barbosa, Fernando; Marques-Teixeira, João

    2014-12-01

    Benign focal epilepsy in childhood with centro-temporal spikes (BECTS) is one of the most common forms of idiopathic epilepsy, with onset from age 3 to 14 years. Although the prognosis for children with BECTS is excellent, some studies have revealed neuropsychological deficits in many domains, including language. Auditory event-related potentials (AERPs) reflect activation of different neuronal populations and are suggested to contribute to the evaluation of auditory discrimination (N1), attention allocation and phonological categorization (N2), and echoic memory (mismatch negativity--MMN). The scarce existing literature about this theme motivated the present study, which aims to investigate and document the existing AERP changes in a group of children with BECTS. AERPs were recorded, during the day, to pure and vocal tones and in a conventional auditory oddball paradigm in five children with BECTS (aged 8-12; mean=10 years; male=5) and in six gender and age-matched controls. Results revealed higher AERP amplitudes for the group of children with BECTS, with a slight latency delay more pronounced at fronto-central electrodes. Children with BECTS may have abnormal central auditory processing, reflected by electrophysiological measures such as AERPs. Moreover, AERPs seem to be a good tool to detect and reliably reveal cortical excitability in children with typical BECTS. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Prepulse Inhibition of Auditory Cortical Responses in the Caudolateral Superior Temporal Gyrus in Macaca mulatta.

    Science.gov (United States)

    Chen, Zuyue; Parkkonen, Lauri; Wei, Jingkuan; Dong, Jin-Run; Ma, Yuanye; Carlson, Synnöve

    2018-04-01

    Prepulse inhibition (PPI) refers to a decreased response to a startling stimulus when another weaker stimulus precedes it. Most PPI studies have focused on the physiological startle reflex and fewer have reported the PPI of cortical responses. We recorded local field potentials (LFPs) in four monkeys and investigated whether the PPI of auditory cortical responses (alpha, beta, and gamma oscillations and evoked potentials) can be demonstrated in the caudolateral belt of the superior temporal gyrus (STGcb). We also investigated whether the presence of a conspecific, which draws attention away from the auditory stimuli, affects the PPI of auditory cortical responses. The PPI paradigm consisted of Pulse-only and Prepulse + Pulse trials that were presented randomly while the monkey was alone (ALONE) and while another monkey was present in the same room (ACCOMP). The LFPs to the Pulse were significantly suppressed by the Prepulse, thus demonstrating PPI of cortical responses in the STGcb. The PPI-related inhibition of the N1 amplitude of the evoked responses and cortical oscillations to the Pulse was not affected by the presence of a conspecific. In contrast, gamma oscillations and the amplitude of the N1 response to Pulse-only were suppressed in the ACCOMP condition compared to the ALONE condition. These findings demonstrate PPI in the monkey STGcb and suggest that the PPI of auditory cortical responses in the monkey STGcb is a pre-attentive inhibitory process that is independent of attentional modulation.

  1. Temporal precision and the capacity of auditory-verbal short-term memory.

    Science.gov (United States)

    Gilbert, Rebecca A; Hitch, Graham J; Hartley, Tom

    2017-12-01

    The capacity of serially ordered auditory-verbal short-term memory (AVSTM) is sensitive to the timing of the material to be stored, and both temporal processing and AVSTM capacity are implicated in the development of language. We developed a novel "rehearsal-probe" task to investigate the relationship between temporal precision and the capacity to remember serial order. Participants listened to a sub-span sequence of spoken digits and silently rehearsed the items and their timing during an unfilled retention interval. After an unpredictable delay, a tone prompted report of the item being rehearsed at that moment. An initial experiment showed cyclic distributions of item responses over time, with peaks preserving serial order and broad, overlapping tails. The spread of the response distributions increased with additional memory load and correlated negatively with participants' auditory digit spans. A second study replicated the negative correlation and demonstrated its specificity to AVSTM by controlling for differences in visuo-spatial STM and nonverbal IQ. The results are consistent with the idea that a common resource underpins both the temporal precision and capacity of AVSTM. The rehearsal-probe task may provide a valuable tool for investigating links between temporal processing and AVSTM capacity in the context of speech and language abilities.

  2. A phenomenological model of the electrically stimulated auditory nerve fiber: temporal and biphasic response properties

    Directory of Open Access Journals (Sweden)

    Colin eHorne

    2016-02-01

    Full Text Available We present a phenomenological model of electrically stimulated auditory nerve fibers (ANFs). The model reproduces the probabilistic and temporal properties of the ANF response to both monophasic and biphasic stimuli, in isolation. The main contribution of the model lies in its ability to reproduce statistics of the ANF response (mean latency, jitter, and firing probability) under both monophasic and cathodic-anodic biphasic stimulation, without changing the model’s parameters. The response statistics of the model depend on stimulus level and duration of the stimulating pulse, reproducing trends observed in the ANF. In the case of biphasic stimulation, the model reproduces the effects of pseudomonophasic pulse shapes and also the dependence on the interphase gap (IPG) of the stimulus pulse, an effect that is quantitatively reproduced. The model is fitted to ANF data using a procedure that uniquely determines each model parameter. It is thus possible to rapidly parameterize a large population of neurons to reproduce a given set of response statistic distributions. Our work extends the stochastic leaky integrate and fire (SLIF) neuron, a well-studied phenomenological model of the electrically stimulated neuron. We extend the SLIF neuron so as to produce a realistic latency distribution by delaying the moment of spiking. During this delay, spiking may be abolished by anodic current. By this means, the probability of the model neuron responding to a stimulus is reduced when a trailing phase of opposite polarity is introduced. By introducing a minimum wait period that must elapse before a spike may be emitted, the model is able to reproduce the differences in the threshold level observed in the ANF for monophasic and biphasic stimuli. Thus, the ANF responses to a large variety of pulse shapes are reproduced correctly by this model.
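
    For readers unfamiliar with this model family, the sketch below shows only a generic stochastic leaky integrate-and-fire core responding to a biphasic pulse; it omits the latency and minimum-wait mechanisms described above, and every parameter value is invented for illustration rather than fitted to ANF data.

```python
import numpy as np

def slif_first_spike(pulse, dt=1e-6, tau=250e-6, threshold=1.0, noise_sd=0.05, seed=0):
    """Generic stochastic leaky integrate-and-fire sketch.

    pulse: array of stimulus current samples (arbitrary units).
    The membrane state leaks with time constant tau; Gaussian noise on the
    threshold makes firing probabilistic. Returns the index of the first
    threshold crossing, or None if no spike occurs."""
    rng = np.random.default_rng(seed)
    v = 0.0
    for i, current in enumerate(pulse):
        v += dt * (-v / tau + current)                  # leaky integration
        if v >= threshold + rng.normal(0.0, noise_sd):  # stochastic threshold
            return i
    return None

# Biphasic pulse: a 50-sample excitatory phase followed by a 50-sample phase of
# opposite polarity (amplitude and polarity convention are arbitrary here).
pulse = np.concatenate([np.full(50, 4.0e4), np.full(50, -4.0e4)])
print(slif_first_spike(pulse))
```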

  3. Top-Down Modulation of Auditory-Motor Integration during Speech Production: The Role of Working Memory.

    Science.gov (United States)

    Guo, Zhiqiang; Wu, Xiuqin; Li, Weifeng; Jones, Jeffery A; Yan, Nan; Sheft, Stanley; Liu, Peng; Liu, Hanjun

    2017-10-25

    Although working memory (WM) is considered as an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study

  4. Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions.

    Science.gov (United States)

    van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean

    2017-04-15

    A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the different nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Auditory-somatosensory temporal sensitivity improves when the somatosensory event is caused by voluntary body movement

    Directory of Open Access Journals (Sweden)

    Norimichi Kitagawa

    2016-12-01

    Full Text Available When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that the performance of temporal order judgment (TOJ) in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M=42.5 ms ±3.8 s.e.m.) than that when their finger was passively stimulated (passive condition: M=66.8 ms ±6.3 s.e.m.). We further examined whether the performance improvement with voluntary action can be attributed to the prediction of the timing of the stimulation from sensory cues (sensory-based prediction), kinesthetic cues contained in voluntary action, and/or to the prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant’s finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst with regular intervals (predictable condition), the TOJ performance was not improved from that in the passive condition. These results suggest that the improvement in sensitivity to temporal disparity between somatosensory and auditory events caused by the voluntary action cannot be attributed to sensory-based prediction and kinesthetic cues. Rather, the prediction from the efference copy of the motor command would be crucial for improving the temporal sensitivity.
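
    The just noticeable difference (JND) used above as the index of temporal-order sensitivity is commonly estimated by fitting a cumulative Gaussian to the proportion of 'sound-first' responses as a function of onset asynchrony. The sketch below illustrates that generic procedure on invented data; it is not the authors' analysis, and the 25%-75% JND convention is one common choice among several.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of 'sound-first' responses versus sound-minus-touch onset
# asynchrony (ms). Data are invented for illustration only.
soa_ms = np.array([-120.0, -80.0, -40.0, 0.0, 40.0, 80.0, 120.0])
p_sound_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def psychometric(soa, pss, sigma):
    """Cumulative Gaussian: pss is the point of subjective simultaneity,
    sigma the spread of the temporal-order judgments."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa_ms, p_sound_first, p0=(0.0, 50.0))

# One common JND convention: half the distance between the 25% and 75% points,
# which for a Gaussian equals sigma * z(0.75).
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```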

  7. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence.

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D; Chait, Maria

    2016-09-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence-the coincidence of sound elements in and across time-is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale, and the intraparietal sulcus, demonstrating that this area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis." © The Author 2016. Published by Oxford University Press.
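
    The stochastic figure-ground (SFG) signal described above, brief chords of random pure tones in which a small set of "figure" frequencies repeats across chords, can be sketched roughly as follows; the chord duration, tone counts, and frequency grid are assumptions for illustration and not the published stimulus parameters.

```python
import numpy as np

def sfg_stimulus(fs=44100, n_chords=40, chord_dur=0.05,
                 n_ground=10, n_figure=4, figure_start=20, seed=0):
    """Rough stochastic figure-ground sketch.

    Each chord contains n_ground pure tones drawn at random from a log-spaced
    frequency grid; from chord index figure_start onward, n_figure fixed
    'figure' frequencies are added and repeat in every subsequent chord, so
    that only their temporal coherence distinguishes them from the ground."""
    rng = np.random.default_rng(seed)
    grid = np.geomspace(200.0, 7200.0, 60)               # candidate frequencies (Hz)
    figure_freqs = rng.choice(grid, size=n_figure, replace=False)
    t = np.arange(int(fs * chord_dur)) / fs
    chords = []
    for i in range(n_chords):
        freqs = list(rng.choice(grid, size=n_ground, replace=False))
        if i >= figure_start:
            freqs += list(figure_freqs)                   # coherent 'figure' components
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord / len(freqs))                 # crude level normalization
    return np.concatenate(chords)

print(sfg_stimulus().shape)  # (88200,) = 40 chords x 50 ms at 44.1 kHz
```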

  8. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    Science.gov (United States)

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from 1 chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up driven figure-ground segregation were localized to planum temporale, and the intraparietal sulcus, demonstrating that this area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis.” PMID:27325682

  9. Auditory processing, speech perception and phonological ability in pre-school children at high-risk for dyslexia: a longitudinal study of the auditory temporal processing theory

    OpenAIRE

    Boets, Bart; Wouters, Jan; Van Wieringen, Astrid; Ghesquière, Pol

    2007-01-01

    This study investigates whether the core bottleneck of literacy-impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school children at high-family risk for dyslexia, compared to a group of well-matched low-risk control children. Based on family risk status and first gra...

  10. Evidence for a neurophysiologic auditory deficit in children with benign epilepsy with centro-temporal spikes.

    Science.gov (United States)

    Liasis, A; Bamiou, D E; Boyd, S; Towell, A

    2006-07-01

    Benign focal epilepsy in childhood with centro-temporal spikes (BECTS) is one of the most common forms of epilepsy. Recent studies have questioned the benign nature of BECTS, as they have revealed neuropsychological deficits in many domains including language. The aim of this study was to investigate whether the epileptic discharges during the night have long-term effects on auditory processing, as reflected in electrophysiological measures, during the day, which could underlie the language deficits. In order to address these questions we recorded baseline electroencephalograms (EEG), sleep EEG and auditory event related potentials in 12 children with BECTS and in age- and gender-matched controls. In the children with BECTS, 5 had unilateral and 3 had bilateral spikes. In the 5 patients with unilateral spikes present during sleep, an asymmetry of the auditory event related component (P85-120) was observed contralateral to the side of epileptiform activity compared to the normal symmetrical vertex distribution that was noted in all controls and in the 3 children with bilateral spikes. In all patients the peak to peak amplitude of this event related potential component was statistically greater compared to the controls. Analysis of subtraction waveforms (deviant - standard) revealed no evidence of a mismatch negativity component in any of the children with BECTS. We propose that the abnormality of P85-120 and the absence of mismatch negativity during wake recordings in this group may arise in response to the long-term effects of spikes occurring during sleep, resulting in disruption of the evolution and maintenance of echoic memory traces. These results may indicate that patients with BECTS have abnormal processing of auditory information at a sensory level ipsilateral to the hemisphere evoking spikes during sleep.

  11. Path integral quantization in the temporal gauge

    International Nuclear Information System (INIS)

    Scholz, B.; Steiner, F.

    1983-06-01

    The quantization of non-Abelian gauge theories in the temporal gauge is studied within Feynman's path integral approach. The standard asymptotic boundary conditions are only imposed on the transverse gauge fields. The fictitious longitudinal gauge quanta are eliminated asymptotically by modified boundary conditions. This abolishes the residual time-independent gauge transformations and leads to a unique fixing of the temporal gauge. The resulting path integral for the generating functional automatically respects Gauss's law. The correct gauge field propagator is derived. It does not suffer from the gauge singularities at n·k = 0 present in the usual treatment of axial gauges. The standard principal value prescription does not work. As a check, the Wilson loop in temporal gauge is calculated with the new propagator. To second order (and to all orders in the Abelian case) the result agrees with the one obtained in the Feynman and Coulomb gauge. (orig.)
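
    For orientation, the temporal gauge referred to above is the axial-type gauge with n = (1, 0, 0, 0), and the "usual treatment" yields a propagator singular at n·k = 0. The snippet below writes out that standard textbook form; it is not the modified propagator derived in the paper, and the sign and metric conventions are assumptions that vary between references.

```latex
% Temporal gauge condition and the naive axial-type gluon propagator
% (standard textbook form; conventions vary between references).
\[
  n^{\mu} = (1,0,0,0), \qquad n \cdot A^{a} = A^{a}_{0} = 0 ,
\]
\[
  D^{ab}_{\mu\nu}(k) = \frac{-i\,\delta^{ab}}{k^{2} + i\epsilon}
  \left[ g_{\mu\nu}
         - \frac{n_{\mu} k_{\nu} + k_{\mu} n_{\nu}}{n \cdot k}
         + \frac{n^{2}\, k_{\mu} k_{\nu}}{(n \cdot k)^{2}} \right],
\]
% which is singular wherever n.k = k_0 = 0; the modified boundary conditions
% discussed above are introduced precisely to avoid this prescription problem.
```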

  12. Congenital external auditory canal atresia and stenosis: temporal bone CT findings

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Hoon; Kim, Bum Soo; Jung, So Lyung; Kim, Young Joo; Chun, Ho Jong; Choi, Kyu Ho; Park, Shi Nae [College of Medicine, Catholic Univ. of Korea, Seoul (Korea, Republic of)

    2002-04-01

    To determine the computed tomographic (CT) findings of atresia and stenosis of the external auditory canal (EAC), and to describe associated abnormalities in surrounding structures. We retrospectively reviewed the axial and coronal CT images of the temporal bone in 15 patients (M:F = 8:7; mean age, 15.8 years) with 16 cases of EAC atresia (unilateral n=11, bilateral n=1) and EAC stenosis (unilateral n=3). Associated abnormalities of the EAC, tympanic cavity, ossicles, mastoid air cells, eustachian tube, facial nerve course, mandibular condyle and condylar fossa, sigmoid sinus and jugular bulb, and the base of the middle cranial fossa were evaluated. Thirteen cases of bony EAC atresia (one bilateral), with an atretic bony plate, were noted, and one case of unilateral membranous atresia, in which soft tissue occluded the EAC. A unilateral lesion occurred more frequently on the right temporal bone (n=8, 73%). Associated abnormalities included a small tympanic cavity (n=8, 62%), decreased mastoid pneumatization (n=8, 62%), displacement of the mandibular condyle and the posterior wall of the condylar fossa (n=7, 54%), dilatation of the Eustachian tube (n=7, 54%), and inferior displacement of the temporal fossa base (n=8, 62%). Abnormalities of ossicles were noted in the malleus (n=12, 92%), incus (n=10, 77%) and stapes (n=6, 46%). The course of the facial nerve was abnormal in four cases, and abnormality of the auditory canal was noted in one. Among three cases of EAC stenosis, ossicular aplasia was observed in one, and in another the location of the mandibular condyle and condylar fossa was abnormal. In the remaining case there was no associated abnormality. Atresia of the EAC is frequently accompanied by abnormalities of the middle ear cavity, ossicles, and adjacent structures other than the inner ear. For patients with atresia and stenosis of this canal, CT of the temporal bone is essential in evaluating these associated abnormalities.

  13. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Science.gov (United States)

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.
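
    The reported dependence of Off-P50m latency on click ISI is, in essence, a linear-regression result. As a rough illustration only (not the authors' analysis code), the sketch below shows how such a latency-versus-ISI fit and its r² could be computed with NumPy; the input arrays are placeholders for measured values, not data from the study.

```python
import numpy as np

def latency_isi_fit(isi_ms, latency_ms):
    """Least-squares linear fit of offset-response latency against ISI.

    isi_ms, latency_ms: measured values (placeholders here, not the study's data).
    Returns (slope, intercept, r_squared).
    """
    isi = np.asarray(isi_ms, dtype=float)
    lat = np.asarray(latency_ms, dtype=float)
    slope, intercept = np.polyfit(isi, lat, 1)       # first-order (linear) fit
    predicted = slope * isi + intercept
    ss_res = np.sum((lat - predicted) ** 2)          # residual sum of squares
    ss_tot = np.sum((lat - lat.mean()) ** 2)         # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot   # r^2 near 1 indicates near-perfect linearity
```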

  14. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Full Text Available Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  15. Temporal correlation between auditory neurons and the hippocampal theta rhythm induced by novel stimulations in awake guinea pigs.

    Science.gov (United States)

    Liberman, Tamara; Velluti, Ricardo A; Pedemonte, Marisa

    2009-11-17

    The hippocampal theta rhythm is associated with the processing of sensory systems such as touch, smell, vision and hearing, as well as with motor activity, the modulation of autonomic processes such as cardiac rhythm, and learning and memory processes. The discovery of temporal correlation (phase locking) between the theta rhythm and both visual and auditory neuronal activity has led us to postulate the participation of such rhythm in the temporal processing of sensory information. In addition, changes in attention can modify both the theta rhythm and the auditory and visual sensory activity. The present report tested the hypothesis that the temporal correlation between auditory neuronal discharges in the inferior colliculus central nucleus (ICc) and the hippocampal theta rhythm could be enhanced by changes in sensory stimulation. We presented chronically implanted guinea pigs with auditory stimuli that varied over time, and recorded the auditory response during wakefulness. It was observed that the stimulation shifts were capable of producing the temporal phase correlations between the theta rhythm and the ICc unit firing, and they differed depending on the stimulus change performed. Such correlations disappeared approximately 6 s after the change presentation. Furthermore, the power of the hippocampal theta rhythm increased in half of the cases presented with a stimulation change. Based on these data, we propose that the degree of correlation between the unitary activity and the hippocampal theta rhythm varies with--and therefore may signal--stimulus novelty.

  16. Mechanisms of spectral and temporal integration in the mustached bat inferior colliculus

    Science.gov (United States)

    Wenstrup, Jeffrey James; Nataraj, Kiran; Sanchez, Jason Tait

    2012-01-01

    This review describes mechanisms and circuitry underlying combination-sensitive response properties in the auditory brainstem and midbrain. Combination-sensitive neurons, performing a type of auditory spectro-temporal integration, respond to specific, properly timed combinations of spectral elements in vocal signals and other acoustic stimuli. While these neurons are known to occur in the auditory forebrain of many vertebrate species, the work described here establishes their origin in the auditory brainstem and midbrain. Focusing on the mustached bat, we review several major findings: (1) Combination-sensitive responses involve facilitatory interactions, inhibitory interactions, or both when activated by distinct spectral elements in complex sounds. (2) Combination-sensitive responses are created in distinct stages: inhibition arises mainly in lateral lemniscal nuclei of the auditory brainstem, while facilitation arises in the inferior colliculus (IC) of the midbrain. (3) Spectral integration underlying combination-sensitive responses requires a low-frequency input tuned well below a neuron's characteristic frequency (ChF). Low-ChF neurons in the auditory brainstem project to high-ChF regions in brainstem or IC to create combination sensitivity. (4) At their sites of origin, both facilitatory and inhibitory combination-sensitive interactions depend on glycinergic inputs and are eliminated by glycine receptor blockade. Surprisingly, facilitatory interactions in IC depend almost exclusively on glycinergic inputs and are largely independent of glutamatergic and GABAergic inputs. (5) The medial nucleus of the trapezoid body (MNTB), the lateral lemniscal nuclei, and the IC play critical roles in creating combination-sensitive responses. We propose that these mechanisms, based on work in the mustached bat, apply to a broad range of mammals and other vertebrates that depend on temporally sensitive integration of information across the audible spectrum. PMID:23109917

  17. Quantifying auditory temporal stability in a large database of recorded music.

    Science.gov (United States)

    Ellis, Robert J; Duan, Zhiyan; Wang, Ye

    2014-01-01

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature"), none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS)--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  18. Coding of auditory temporal and pitch information by hippocampal individual cells and cell assemblies in the rat.

    Science.gov (United States)

    Sakurai, Y

    2002-01-01

    This study reports how hippocampal individual cells and cell assemblies cooperate for neural coding of pitch and temporal information in memory processes for auditory stimuli. Each rat performed two tasks, one requiring discrimination of auditory pitch (high or low) and the other requiring discrimination of their duration (long or short). Some CA1 and CA3 complex-spike neurons showed task-related differential activity between the high and low tones in only the pitch-discrimination task. However, without exception, neurons which showed task-related differential activity between the long and short tones in the duration-discrimination task were always task-related neurons in the pitch-discrimination task. These results suggest that temporal information (long or short), in contrast to pitch information (high or low), cannot be coded independently by specific neurons. The results also indicate that the two different behavioral tasks cannot be fully differentiated by the task-related single neurons alone and suggest a model of cell-assembly coding of the tasks. Cross-correlation analysis among activities of simultaneously recorded multiple neurons supported the suggested cell-assembly model. Considering those results, this study concludes that dual coding by hippocampal single neurons and cell assemblies is working in memory processing of pitch and temporal information of auditory stimuli. The single neurons encode both auditory pitches and their temporal lengths and the cell assemblies encode types of tasks (contexts or situations) in which the pitch and the temporal information are processed.

  19. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

    Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked-potential (AEP) provides a sensitive metric of duration processing in human auditory cortex and lateralization of MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e. rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.

  20. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    Science.gov (United States)

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  1. Auditory training changes temporal lobe connectivity in 'Wernicke's aphasia': a randomised trial.

    Science.gov (United States)

    Woodhead, Zoe Vj; Crinion, Jennifer; Teki, Sundeep; Penny, Will; Price, Cathy J; Leff, Alexander P

    2017-07-01

    Aphasia is one of the most disabling sequelae after stroke, occurring in 25%-40% of stroke survivors. However, there remains a lack of good evidence for the efficacy or mechanisms of speech comprehension rehabilitation. This within-subjects trial tested two concurrent interventions in 20 patients with chronic aphasia with speech comprehension impairment following left hemisphere stroke: (1) phonological training using 'Earobics' software and (2) a pharmacological intervention using donepezil, an acetylcholinesterase inhibitor. Donepezil was tested in a double-blind, placebo-controlled, cross-over design using block randomisation with bias minimisation. The primary outcome measure was speech comprehension score on the comprehensive aphasia test. Magnetoencephalography (MEG) with an established index of auditory perception, the mismatch negativity response, tested whether the therapies altered effective connectivity at the lower (primary) or higher (secondary) level of the auditory network. Phonological training improved speech comprehension abilities and was particularly effective for patients with severe deficits. No major adverse effects of donepezil were observed, but it had an unpredicted negative effect on speech comprehension. The MEG analysis demonstrated that phonological training increased synaptic gain in the left superior temporal gyrus (STG). Patients with more severe speech comprehension impairments also showed strengthening of bidirectional connections between the left and right STG. Phonological training resulted in a small but significant improvement in speech comprehension, whereas donepezil had a negative effect. The connectivity results indicated that training reshaped higher order phonological representations in the left STG and (in more severe patients) induced stronger interhemispheric transfer of information between higher levels of auditory cortex. Clinical trial registration: This trial was registered with EudraCT (2005-004215-30, https

  2. Using a staircase procedure for the objective measurement of auditory stream integration and segregation thresholds

    Directory of Open Access Journals (Sweden)

    Mona Isabel Spielmann

    2013-08-01

    Full Text Available Auditory scene analysis describes the ability to segregate relevant sounds out from the environment and to integrate them into a single sound stream using the characteristics of the sounds to determine whether or not they are related. This study aims to contrast task performances in objective threshold measurements of segregation and integration using identical stimuli, manipulating two variables known to influence streaming, inter-stimulus-interval (ISI) and frequency difference (Δf). For each measurement, one parameter (either ISI or Δf) was held constant while the other was altered in a staircase procedure. By using this paradigm, it is possible to test within-subject across multiple conditions, covering a wide Δf and ISI range in one testing session. The objective tasks were based on across-stream temporal judgments (facilitated by integration) and within-stream deviance detection (facilitated by segregation). Results show the objective integration task is well suited for combination with the staircase procedure, as it yields consistent threshold measurements for separate variations of ISI and Δf, as well as being significantly related to the subjective thresholds. The objective segregation task appears less suited to the staircase procedure. With the integration-based staircase paradigm, a comprehensive assessment of streaming thresholds can be obtained in a relatively short space of time. This permits efficient threshold measurements particularly in groups for which there is little prior knowledge on the relevant parameter space for streaming perception.
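
    The record does not give the exact staircase rule used, but the general scheme of holding one parameter fixed while adapting the other can be sketched with a generic 2-down/1-up staircase on the frequency difference Δf. In the sketch below the starting value, step size, floor, and stopping rule are arbitrary illustrative assumptions, and run_trial stands for whichever integration or segregation task is being run.

```python
def staircase_threshold(run_trial, start_df=10.0, step=1.0,
                        min_df=0.5, max_reversals=8):
    """Generic 2-down/1-up staircase over the frequency difference (delta-f).

    run_trial(df) must return True for a correct response at separation df.
    All parameter values are illustrative, not the published settings.
    A 2-down/1-up rule converges near 70.7% correct; the threshold is taken
    as the mean delta-f at the recorded reversals.
    """
    df, correct_in_row, last_direction, reversals = start_df, 0, None, []
    while len(reversals) < max_reversals:
        if run_trial(df):
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> make task harder
                correct_in_row = 0
                if last_direction == 'up':
                    reversals.append(df)       # direction change = reversal
                last_direction = 'down'
                df = max(min_df, df - step)
        else:                                  # any error -> make task easier
            correct_in_row = 0
            if last_direction == 'down':
                reversals.append(df)
            last_direction = 'up'
            df += step
    return sum(reversals) / len(reversals)
```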

  3. Comparison of auditory temporal resolution between monolingual Persian and bilingual Turkish-Persian individuals.

    Science.gov (United States)

    Omidvar, Shaghayegh; Jafari, Zahra; Tahaei, Ali Akbar; Salehi, Masoud

    2013-04-01

    The aims of this study were to prepare a Persian version of the temporal resolution test using the method of Phillips et al (1994) and Stuart and Phillips (1996), and to compare the word-recognition performance in the presence of continuous and interrupted noise as well as the temporal resolution abilities between monolingual (ML) Persian and bilingual (BL) Turkish-Persian young adults. Word-recognition scores (WRSs) were obtained in quiet and in the presence of background competing continuous and interrupted noise at signal-to-noise ratios (SNRs) of -20, -10, 0, and 10 dB. Two groups of 33 ML Persian and 36 BL Turkish-Persian volunteers participated. WRSs significantly differed between ML and BL subjects at four sensation levels in the presence of continuous and interrupted noise. However, the difference in the release from masking between ML and BL subjects was not significant at the studied SNRs. BL Turkish-Persian listeners seem to show poorer performance when responding to Persian words in continuous and interrupted noise. However, bilingualism may not affect auditory temporal resolution ability.

  4. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT).

    Science.gov (United States)

    Mohammad Esmaeilzadeh, Sahar; Sharifi, Shahla; Tayarani Niknezhad, Hamid

    2013-09-01

    Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). There have been several studies which focus on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach of AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children. In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, for example. We also used reliable websites such as the American Choral Directors Association and Joint Committee on Infant Hearing websites. The websites were reviewed and the key words in this article were used to find appropriate references. Those articles which are related to ours in content were selected. AVT, MT, and PT enhance children's communication and language skills from an early age. Each method has a meaningful impact on hearing loss, so by integrating them we obtain a comprehensive method for facilitating communication and language learning. To achieve this goal, the article offers methods and techniques to perform AVT and MT integrated with PT, leading to an approach which offers all the advantages of these three types of therapy.

  5. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten

    2007-01-01

    Periodic amplitude modulations (AMs) of an acoustic stimulus are presumed to be encoded in temporal activity patterns of neurons in the cochlear nucleus. Physiological recordings indicate that this temporal AM code is transformed into a rate-based periodicity code along the ascending auditory pathway. ... The proposed circuit accounts for the encoding of AM depth over a large dynamic range and for modulation frequency selective processing of complex sounds....

  6. Morphometrical Study of the Temporal Bone and Auditory Ossicles in Guinea Pig

    Directory of Open Access Journals (Sweden)

    Ahmadali Mohammadpour

    2011-03-01

    Full Text Available In this research, anatomical descriptions of the structure of the temporal bone and auditory ossicles were performed based on dissection of ten guinea pigs. The results showed that the guinea pig temporal bone was similar to that of other animals and had three parts: squamous, tympanic and petrous. The tympanic part was much better developed and consisted of an oval-shaped tympanic bulla with many recesses in the tympanic cavity. The auditory ossicles of the guinea pig consisted of three small bones: malleus, incus and stapes, but the head of the malleus and the body of the incus were fused, forming a malleoincudal complex. The average morphometric parameters showed that the malleus was 3.53 ± 0.22 mm in total length. In addition to the head and handle, the malleus had two distinct processes: lateral and muscular. The incus had a total length of 1.23 ± 0.02 mm. It had a long and a short crus, although the long crus was better developed than the short one. The lenticular bone was a round bone that articulated with the long crus of the incus. The stapes had a total length of 1.38 ± 0.04 mm. The anterior crus (0.86 ± 0.08 mm) was larger than the posterior crus (0.76 ± 0.08 mm). It is concluded that, in the guinea pig, the malleus and the incus are fused, forming a junction called the incus-malleus complex, while in other animals these are separate bones. The stapes is larger, has a triangular shape, and its anterior and posterior crura are thicker than those of other rodents. Therefore, for otological studies, the guinea pig is a good laboratory animal.

  7. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    Science.gov (United States)

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  8. Auditory-visual integration of emotional signals in a virtual environment for cynophobia.

    Science.gov (United States)

    Taffou, Marine; Chapoulie, Emmanuelle; David, Adrien; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2012-01-01

    Cynophobia (dog phobia) has both visual and auditory relevant components. In order to investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied the efficiency of auditory-visual environments in generating presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia in order to assess the capacity of auditory-visual virtual environments (VE) to generate fear reactions. Our application involves both high fidelity visual stimulation displayed in an immersive space and 3D sound. This specificity enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality and both. Our specific presentation of animated dog stimuli creates an environment that is highly arousing, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.

  9. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex.

    Science.gov (United States)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2017-05-01

    A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Decreased middle temporal gyrus connectivity in the language network in schizophrenia patients with auditory verbal hallucinations.

    Science.gov (United States)

    Zhang, Linchuan; Li, Baojuan; Wang, Huaning; Li, Liang; Liao, Qimei; Liu, Yang; Bao, Xianghong; Liu, Wenlei; Yin, Hong; Lu, Hongbing; Tan, Qingrong

    2017-07-13

    Auditory verbal hallucinations (AVHs) are among the most common symptoms of schizophrenia, and their long-term persistence causes great distress to patients. Neuroimaging studies of schizophrenia have indicated that AVHs are associated with altered functional and structural connectivity within the language network. However, effective connectivity, which reflects directed information flow within this network and is of great importance for understanding the neural mechanisms of the disorder, remains largely unknown. In this study, we utilized stochastic dynamic causal modeling (DCM) to investigate directed connections within the language network in schizophrenia patients with and without AVHs. Thirty-six patients with schizophrenia (18 with AVHs and 18 without AVHs) and 37 healthy controls participated in the current resting-state functional magnetic resonance imaging (fMRI) study. The results showed that the connection from the left inferior frontal gyrus (LIFG) to the left middle temporal gyrus (LMTG) was significantly decreased in patients with AVHs compared to those without AVHs. Meanwhile, the effective connection from the left inferior parietal lobule (LIPL) to the LMTG was significantly decreased compared to the healthy controls. Our findings suggest an aberrant pattern of causal interactions within the language network in patients with AVHs, indicating that hypoconnectivity or disrupted connections from frontal to temporal speech areas might be critical to the pathological basis of AVHs. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT)

    Directory of Open Access Journals (Sweden)

    Sahar Mohammad Esmaeilzadeh

    2013-10-01

    Full Text Available Introduction: Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). There have been several studies which focus on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach of AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children. Materials and Methods: In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, for example. We also used reliable websites such as the American Choral Directors Association and Joint Committee on Infant Hearing websites. The websites were reviewed and the key words in this article were used to find appropriate references. Those articles which are related to ours in content were selected. Results: Recent technologies have brought about great advancement in the field of hearing disorders. Now these impairments can be detected at birth, and in the majority of cases, hearing-impaired children can develop fluent spoken language through audition. According to research on the relationship between hearing-impaired children's communication and language skills and different approaches of therapy, it is known that learning through listening and

  12. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    Science.gov (United States)

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  13. Auditory temporal processing tests – Normative data for Polish-speaking adults

    Directory of Open Access Journals (Sweden)

    Joanna Majak

    2015-04-01

    Full Text Available Introduction: Several subjects exposed to neurotoxins in the workplace need to be assessed for central auditory deficit. Although central auditory processing tests are widely used in other countries, they have not been standardized for the Polish population. The aim of the study has been to evaluate the range of reference values for 3 temporal processing tests: the duration pattern test (DPT), the frequency pattern test (FPT) and the gaps in noise test (GIN). Material and Methods: The study included 76 normal-hearing individuals (38 women, 38 men) aged 18 to 54 years (mean ± standard deviation: 39.4±9.1). All study participants had no history of any chronic disease and underwent a standard ENT examination. Results: The reference range for the DPT was established at 55.3% or more of correct answers, while for the FPT it stood at 56.7% or more of correct answers. The mean threshold for both ears in the GIN test was defined as 6 ms. In this study there were no significant associations between the DPT, FPT and GIN results and age or gender. Symmetry between the ears in the case of the DPT, FPT and GIN was found. Conclusions: Reference ranges obtained in this study for the DPT and FPT in the Polish population are lower than reference ranges previously published for other nations, while the GIN test results correspond to those published in the related literature. Further investigations are needed to explain the discrepancies between normative values in Poland and other countries and to adapt the tests for occupational medicine purposes. Med Pr 2015;66(2):145–152

  14. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli.

    Directory of Open Access Journals (Sweden)

    Thibaud Necciari

    Full Text Available Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using

  15. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli.

    Science.gov (United States)

    Necciari, Thibaud; Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard

    2016-01-01

    Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using maximally

  16. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  17. How Auditory Experience Differentially Influences the Function of Left and Right Superior Temporal Cortices.

    Science.gov (United States)

    Twomey, Tae; Waters, Dafydd; Price, Cathy J; Evans, Samuel; MacSweeney, Mairéad

    2017-09-27

    To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC) we collected fMRI data from deaf and hearing participants (male and female), who either acquired sign language early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing. In contrast, the left STC was

  18. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    Science.gov (United States)

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  19. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency : An ERP study

    NARCIS (Netherlands)

    van Zuijen, Titia L.; Plakas, Anna; Maassen, Ben A. M.; Been, Pieter; Maurits, Natasha M.; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-01-01

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at-risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or

  20. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: An ERP study

    NARCIS (Netherlands)

    Van Zuijen, Titia L.; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M.; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-01-01

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at-risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or

  1. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

    Full Text Available There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time-periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that this phenomenon should be apparent in naturalistic visual experiences. We tested this using movie-clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty to follow stimuli at the hypothesized 2-3 second scrambling condition. Moreover, only this difference could not be accounted for by low-level visual properties. This provides the first evidence that this 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.

  2. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity...

  3. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory

    OpenAIRE

    Masahiro Kawasaki; Keiichi Kitajo; Yoko Yamaguchi

    2014-01-01

    In humans, theta phase (4–8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from...

  4. Temporal Sequence of Visuo-Auditory Interaction in Multiple Areas of the Guinea Pig Visual Cortex

    Science.gov (United States)

    Nishimura, Masataka; Song, Wen-Jie

    2012-01-01

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex, to visual and/or acoustic stimulations. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation, and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction. PMID:23029483

  5. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech.

  6. The Role of Inhibition in a Computational Model of an Auditory Cortical Neuron during the Encoding of Temporal Information

    Science.gov (United States)

    Bendor, Daniel

    2015-01-01

    In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex. PMID:25879843
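
    The dichotomy described above can be caricatured with a toy leaky-integrator neuron; the sketch below is only a loose illustration of the idea, not the published model, and all parameter values are arbitrary assumptions. When inhibition arrives a few milliseconds after each sound-evoked excitatory event, firing locks to the stimulus, whereas coincident, balanced inhibition cancels the stimulus-locked drive.

```python
import numpy as np

def simulate_spikes(inhib_delay_ms, inhib_gain=1.5, event_interval_ms=25.0,
                    dur_ms=500.0, dt=0.1, tau_ms=10.0, exc_gain=1.5,
                    threshold=1.0):
    """Toy leaky integrator driven by a periodic train of acoustic events.

    Each event delivers excitation plus inhibition arriving inhib_delay_ms later.
    Delayed inhibition leaves stimulus-locked spiking intact; coincident,
    balanced inhibition (delay 0, equal gain) cancels it. Illustrative only.
    """
    n = int(dur_ms / dt)
    drive = np.zeros(n)
    for ev in np.arange(event_interval_ms, dur_ms - event_interval_ms, event_interval_ms):
        drive[int(ev / dt)] += exc_gain                 # sound-evoked excitation
        i = int((ev + inhib_delay_ms) / dt)
        if i < n:
            drive[i] -= inhib_gain                      # inhibition (possibly delayed)
    v, spikes = 0.0, []
    for step in range(n):
        v += dt * (-v / tau_ms) + drive[step]           # leaky integration of the drive
        if v >= threshold:
            spikes.append(step * dt)
            v = 0.0                                     # reset after a spike
    return spikes

locked = simulate_spikes(inhib_delay_ms=5.0)            # spikes lock to each event
cancelled = simulate_spikes(inhib_delay_ms=0.0)         # coincident balanced inhibition: no locked spikes
```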

  7. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  8. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Control participants (72), distributed across 10 age groups, served as a control group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also benefited from a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regards to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re

  9. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    Science.gov (United States)

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal
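
    The adaptation-plus-anticipation idea can be sketched in a few lines. The simulation below is only schematic (it is not the published ADAM implementation, and alpha, the pacing sequence and all names are assumptions): adaptation subtracts a fraction of the last tap-beat asynchrony, while anticipation extrapolates the ongoing tempo change before it happens:

        import numpy as np

        def simulate_taps(onsets, alpha=0.5, use_anticipation=True):
            """Tap along with pacing 'onsets' (s). Adaptation: subtract a fraction alpha
            of the last tap-beat asynchrony (error correction). Anticipation: extrapolate
            the next inter-onset interval linearly from the last two heard intervals
            instead of assuming the tempo stays constant. Illustrative only."""
            taps = [onsets[0]]                                   # assume the first tap is on the beat
            for k in range(1, len(onsets)):
                if use_anticipation and k >= 3:
                    ioi_last = onsets[k - 1] - onsets[k - 2]
                    ioi_prev = onsets[k - 2] - onsets[k - 3]
                    predicted = ioi_last + (ioi_last - ioi_prev) # linear tempo extrapolation
                elif k >= 2:
                    predicted = onsets[k - 1] - onsets[k - 2]    # reuse the last heard interval
                else:
                    predicted = onsets[1] - onsets[0]            # nothing heard yet; seed value
                asyn = taps[-1] - onsets[k - 1]                  # error on the previous beat
                taps.append(taps[-1] + predicted - alpha * asyn) # correction plus prediction
            return np.array(taps)

        # Pacing sequence that accelerates smoothly from 600-ms to 400-ms intervals.
        iois = np.linspace(0.6, 0.4, 30)
        onsets = np.concatenate(([0.0], np.cumsum(iois)))
        for anticipate in (False, True):
            taps = simulate_taps(onsets, use_anticipation=anticipate)
            asyn = taps - onsets
            print(anticipate, np.sqrt(np.mean(asyn[2:] ** 2)))   # anticipation lowers this

    On this toy accelerating sequence the adaptation-only tapper lags systematically, while adding extrapolation shrinks the root-mean-square asynchrony, which mirrors, qualitatively, the benefit of anticipation described above.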

  10. Distinct Temporal Coordination of Spontaneous Population Activity between Basal Forebrain and Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Josue G. Yague

    2017-09-01

    Full Text Available The basal forebrain (BF) has long been implicated in attention, learning and memory, and recent studies have established a causal relationship between artificial BF activation and arousal. However, neural ensemble dynamics in the BF still remains unclear. Here, recording neural population activity in the BF and comparing it with simultaneously recorded cortical population activity under both anesthetized and unanesthetized conditions, we investigate the difference in the structure of spontaneous population activity between the BF and the auditory cortex (AC) in mice. The AC neuronal population shows a skewed spike rate distribution, a higher proportion of short (≤80 ms) inter-spike intervals (ISIs) and a rich repertoire of rhythmic firing across frequencies. Although the distribution of spontaneous firing rate in the BF is also skewed, a proportion of short ISIs can be explained by a Poisson model at short time scales (≤20 ms) and spike count correlations are lower compared to AC cells, with optogenetically identified cholinergic cell pairs showing exceptionally higher correlations. Furthermore, a smaller fraction of BF neurons shows spike-field entrainment across frequencies: a subset of BF neurons fire rhythmically at slow (≤6 Hz) frequencies, with varied phase preferences to ongoing field potentials, in contrast to a consistent phase preference of AC populations. Firing of these slow rhythmic BF cells is correlated to a greater degree than that of other rhythmic BF cell pairs. Overall, the fundamental difference in the structure of population activity between the AC and BF is their temporal coordination, in particular their operational timescales. These results suggest that BF neurons slowly modulate downstream populations whereas cortical circuits transmit signals on multiple timescales. Thus, the characterization of the neural ensemble dynamics in the BF provides further insight into the neural mechanisms by which brain states are regulated.
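
    The ISI comparison reported above can be illustrated with a small check of how the fraction of short intervals in a spike train compares with the fraction expected from a rate-matched Poisson process (the 80-ms cutoff follows the abstract; the spike train and function names below are invented for illustration):

        import numpy as np

        def short_isi_excess(spike_times, cutoff=0.080):
            """Fraction of inter-spike intervals shorter than 'cutoff' (s), alongside the
            fraction expected from a rate-matched Poisson process (exponential ISIs)."""
            isis = np.diff(np.sort(spike_times))
            rate = 1.0 / isis.mean()
            observed = np.mean(isis <= cutoff)
            expected = 1.0 - np.exp(-rate * cutoff)   # P(ISI <= cutoff) under Poisson firing
            return observed, expected

        rng = np.random.default_rng(0)
        poisson_train = np.cumsum(rng.exponential(0.1, size=2000))  # ~10 Hz Poisson-like unit
        print(short_isi_excess(poisson_train))        # observed ≈ expected for a Poisson train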

  11. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music

    Directory of Open Access Journals (Sweden)

    Niels R. Disbergen

    2018-03-01

    Full Text Available Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen

  12. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music.

    Science.gov (United States)

    Disbergen, Niels R; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J

    2018-01-01

    Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects ( N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also

  13. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams

    Directory of Open Access Journals (Sweden)

    Yi-Huang eSu

    2014-12-01

    Full Text Available Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beat, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  15. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams.

    Science.gov (United States)

    Su, Yi-Huang

    2014-01-01

    Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beat, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  16. A Novel Functional Magnetic Resonance Imaging Paradigm for the Preoperative Assessment of Auditory Perception in a Musician Undergoing Temporal Lobe Surgery.

    Science.gov (United States)

    Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J

    2018-03-01

    Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. The role of auditory temporal cues in the fluency of stuttering adults

    OpenAIRE

    Furini, Juliana; Picoloto, Luana Altran; Marconato, Eduarda; Bohnen, Anelise Junqueira; Cardoso, Ana Claudia Vieira; Oliveira, Cristiane Moço Canhetti de

    2017-01-01

    ABSTRACT Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering in non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with Stuttering (Research Group - RG), and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds dela...

  18. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.
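
    The linear-regression approach described above, relating a stochastically varying coherence level to the EEG, is essentially a temporal response function estimate. A minimal sketch, with a made-up coherence regressor and a toy EEG channel standing in for real data, might look like this:

        import numpy as np

        def estimate_trf(stimulus, eeg, fs, lags_ms=(0, 300), ridge=1.0):
            """Regress one EEG channel onto time-lagged copies of a stimulus regressor
            (here, the coherence level of an SFG-like stimulus). Returns the lag axis in
            ms and the weight per lag, i.e. a temporal response function. Sketch only."""
            lag_samples = np.arange(int(lags_ms[0] * fs / 1000), int(lags_ms[1] * fs / 1000))
            X = np.column_stack([np.roll(stimulus, lag) for lag in lag_samples])
            for i, lag in enumerate(lag_samples):      # zero out samples wrapped by np.roll
                X[:lag, i] = 0.0
            XtX = X.T @ X + ridge * np.eye(X.shape[1]) # ridge-regularized normal equations
            weights = np.linalg.solve(XtX, X.T @ eeg)
            return lag_samples / fs * 1000.0, weights

        # Toy data: a coherence time course and an EEG channel echoing it ~150 ms later.
        fs = 128
        rng = np.random.default_rng(1)
        coherence = rng.standard_normal(fs * 60)
        eeg = np.roll(coherence, int(0.15 * fs)) + 0.5 * rng.standard_normal(coherence.size)
        lags, trf = estimate_trf(coherence, eeg, fs)
        print(lags[np.argmax(np.abs(trf))])            # peaks near the built-in 150-ms lag

    With real recordings, weight profiles of this kind are what coherence-tracking analyses examine, including how they change between passive and active listening.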

  19. Spectro-temporal analysis of complex tones: two cortical processes dependent on retention of sounds in the long auditory store.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L

    2000-09-01

    To examine whether two cortical processes concerned with spectro-temporal analysis of complex tones, a 'C-process' generating CN1 and CP2 potentials at approximately 100 and 180 ms after a sudden change of pitch or timbre, and an 'M-process' generating MN1 and MP2 potentials of similar latency at the sudden cessation of repeated changes, are dependent on accumulation of a sound image in the long auditory store. The durations of steady (440 Hz) and rapidly oscillating (440-494 Hz, 16 changes/s) pitch of a synthesized 'clarinet' tone were reciprocally varied between 0.5 and 4.5 s within a duty cycle of 5 s. Potentials were recorded at the beginning and end of the period of oscillation in 10 non-attending normal subjects. The CN1 at the beginning of pitch oscillation and the MN1 at the end were both strongly influenced by the duration of the immediately preceding stimulus pattern, mean amplitudes being 3-4 times larger after 4.5 s as compared with 0.5 s. The processes responsible for both CN1 and MN1 are influenced by the duration of the preceding sound pattern over a period comparable to that of the 'echoic memory' or long auditory store. The store therefore appears to occupy a key position in spectro-temporal sound analysis. The C-process is concerned with the spectral structure of complex sounds, and may therefore reflect the 'grouping' of frequency components underlying auditory stream segregation. The M-process (mismatch negativity) is concerned with the temporal sound structure, and may play an important role in the extraction of information from sequential sounds.

  20. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    Science.gov (United States)

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features. Copyright 2010 Elsevier B.V. All rights reserved.

  1. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  2. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  3. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    Science.gov (United States)

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  4. Pitting temporal against spatial integration in schizophrenic patients.

    Science.gov (United States)

    Herzog, Michael H; Brand, Andreas

    2009-06-30

    Schizophrenic patients show strong impairments in visual backward masking possibly caused by deficits on the early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Spatial as well as temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we show further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.

  5. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. The present study analyzed the role of modulation filtering...... in the preprocessing of different speech intelligibility models by comparing predictions from models that either assume a spectro-temporal (i.e., two-dimensional) or a temporal-only (i.e., one-dimensional) modulation filterbank. Furthermore, the role of the decision metric for speech intelligibility was investigated...... subtraction. The results suggested that a decision metric based on the SNRenv may provide a more general basis for predicting speech intelligibility than a metric based on the MTF. Moreover, the one-dimensional modulation filtering process was found to be sufficient to account for the data when combined...
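
    The SNRenv idea referred to above can be sketched as an envelope-power signal-to-noise ratio summed across a one-dimensional modulation filterbank. The band edges, envelope sampling rate and function below are placeholders for illustration, not the published sEPSM implementation:

        import numpy as np
        from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

        def snr_env(noisy_speech, noise, fs, env_fs=100,
                    mod_bands=((1, 2), (2, 4), (4, 8), (8, 16))):
            """Envelope-power SNR summed over a one-dimensional modulation filterbank.
            fs is an integer audio sampling rate; band edges in Hz are placeholders."""
            def band_powers(x):
                env = np.abs(hilbert(x))                  # temporal envelope of the waveform
                env = resample_poly(env, env_fs, fs)      # downsample the envelope
                env = env - env.mean()
                powers = []
                for lo, hi in mod_bands:
                    sos = butter(2, [lo, hi], btype="bandpass", fs=env_fs, output="sos")
                    powers.append(np.mean(sosfiltfilt(sos, env) ** 2))
                return np.array(powers)
            p_mix, p_noise = band_powers(noisy_speech), band_powers(noise)
            p_speech = np.maximum(p_mix - p_noise, 1e-12) # envelope power attributed to speech
            return 10 * np.log10(np.sum(p_speech / np.maximum(p_noise, 1e-12)))

        # Toy check: 4-Hz-modulated noise (speech-like) mixed with stationary noise.
        rng = np.random.default_rng(0)
        fs = 16000
        t = np.arange(fs * 2) / fs
        noise = rng.standard_normal(fs * 2)
        speech_like = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * rng.standard_normal(fs * 2)
        print(snr_env(speech_like + noise, noise, fs))

    In the models compared above, a metric of this kind is then typically mapped to intelligibility through a separate decision stage; the sketch stops at the metric itself.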

  6. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    International Nuclear Information System (INIS)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul

    2001-01-01

    To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally, from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while that canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  7. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)

    2001-12-01

    To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally, from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while that canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  8. Binding ‘when’ and ‘where’ impairs temporal, but not spatial recall in auditory and visual working memory

    Directory of Open Access Journals (Sweden)

    Franco eDelogu

    2012-03-01

    Full Text Available The where and when of events seem naturally linked to each other, but only a few studies have investigated whether and how they are associated in working memory. We tested whether the location of items and their temporal order are jointly or independently encoded. We also verified if spatio-temporal binding is influenced by the sensory modality of items. Participants were requested to memorize the location and/or the serial order of five items (environmental sounds or pictures) sequentially presented from five different locations. Next, they were asked to recall either the item location or their order of presentation within the sequence. Attention during encoding was manipulated by contrasting blocks of trials in which participants were requested to encode only one feature with blocks of trials where they had to encode both features. Results show an interesting interaction between task and attention. Accuracy in the serial order recall was affected by the simultaneous encoding of item location, whereas the recall of item location was unaffected by the concurrent encoding of the serial order of items. This asymmetric influence of attention on the two tasks was similar for the auditory and visual modality. Together, these data indicate that item location is processed in a relatively automatic fashion, whereas maintaining serial order is more demanding in terms of attention. The remarkably analogous results for auditory and visual memory performance suggest that the binding of serial order and location in working memory is not modality-dependent, and may involve common intersensory mechanisms.

  9. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates...

  10. Auditory properties in the parabelt regions of the superior temporal gyrus in the awake macaque monkey: an initial survey.

    Science.gov (United States)

    Kajikawa, Yoshinao; Frey, Stephen; Ross, Deborah; Falchier, Arnaud; Hackett, Troy A; Schroeder, Charles E

    2015-03-11

    The superior temporal gyrus (STG) is on the inferior-lateral brain surface near the external ear. In macaques, 2/3 of the STG is occupied by an auditory cortical region, the "parabelt," which is part of a network of inferior temporal areas subserving communication and social cognition as well as object recognition and other functions. However, due to its location beneath the squamous temporal bone and temporalis muscle, the STG, like other inferior temporal regions, has been a challenging target for physiological studies in awake-behaving macaques. We designed a new procedure for implanting recording chambers to provide direct access to the STG, allowing us to evaluate neuronal properties and their topography across the full extent of the STG in awake-behaving macaques. Initial surveys of the STG have yielded several new findings. Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes comparable to those of responses to 1/3 octave band-pass noise. Mapping results showed longer response latencies in more rostral sites and possible tonotopic patterns parallel to core and belt areas, suggesting the reversal of gradients between caudal and rostral parabelt areas. These results will help further exploration of parabelt areas. Copyright © 2015 the authors 0270-6474/15/354140-11$15.00/0.

  11. Temporal processing, localization and auditory closure in individuals with unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Regiane Nishihata

    2012-01-01

    PURPOSE: To assess the abilities of temporal processing, sound localization, and auditory closure, and to investigate possible associations with complaints of learning, communication and language difficulties in individuals with unilateral hearing loss. METHODS: Participants were 26 individuals with ages between 8 and 15 years, divided into two groups: Unilateral hearing loss group; and Normal hearing group. Each group was composed of 13 individuals, matched by gender, age and educational level. All subjects were submitted to anamnesis, peripheral hearing evaluation, and auditory processing evaluation through behavioral tests of sound localization, sequential memory, Random Gap Detection Test, and speech-in-noise test. Nonparametric statistical tests were used to compare the groups, considering the presence or absence of hearing loss and the ear with hearing loss. RESULTS: Unilateral hearing loss started during preschool, and had unknown or identified etiologies, such as meningitis, traumas or mumps. Most individuals reported delays in speech, language and learning development, especially those with hearing loss in the right ear. The group with hearing loss had worse responses in the abilities of temporal ordering and resolution, sound localization and auditory closure. Individuals with hearing loss in the left ear showed worse results than those with hearing loss in the right ear in all abilities, except in sound localization. CONCLUSION: The presence of unilateral hearing loss causes sound localization, auditory closure, temporal ordering and temporal resolution difficulties. Individuals with unilateral hearing loss in the right ear have more complaints than those with unilateral hearing loss in the left ear. Individuals with hearing loss in the left ear have more difficulties in auditory closure, temporal resolution, and temporal ordering.

  12. All-optical temporal integration of ultrafast pulse waveforms.

    Science.gov (United States)

    Park, Yongwoo; Ahn, Tae-Jung; Dai, Yitang; Yao, Jianping; Azaña, José

    2008-10-27

    An ultrafast all-optical temporal integrator is experimentally demonstrated. The demonstrated integrator is based on a very simple and practical solution only requiring the use of a widely available all-fiber passive component, namely a reflection uniform fiber Bragg grating (FBG). This design allows overcoming the severe speed (bandwidth) limitations of the previously demonstrated photonic integrator designs. We demonstrate temporal integration of a variety of ultrafast optical waveforms, including Gaussian, odd-symmetry Hermite Gaussian, and (odd-)symmetry double pulses, with temporal features as fast as ~6-ps, which is about one order of magnitude faster than in previous photonic integration demonstrations. The developed device is potentially interesting for a multitude of applications in all-optical computing and information processing, ultrahigh-speed optical communications, ultrafast pulse (de-)coding, shaping and metrology.

  13. Visual-Auditory Integration during Speech Imitation in Autism

    Science.gov (United States)

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  14. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    Science.gov (United States)

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system

  15. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration

    Directory of Open Access Journals (Sweden)

    Maren Stropahl

    2017-01-01

    Full Text Available There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises if cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex by means of EEG source localization in response to human faces and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the

  16. The Role of Temporal Envelope and Fine Structure in Mandarin Lexical Tone Perception in Auditory Neuropathy Spectrum Disorder.

    Directory of Open Access Journals (Sweden)

    Shuo Wang

    Full Text Available Temporal information in a signal can be partitioned into temporal envelope (E) and fine structure (FS). Fine structure is important for lexical tone perception for normal-hearing (NH) listeners, and listeners with sensorineural hearing loss (SNHL) have an impaired ability to use FS in lexical tone perception due to the reduced frequency resolution. The present study was aimed to assess which of the acoustic aspects (E or FS) played a more important role in lexical tone perception in subjects with auditory neuropathy spectrum disorder (ANSD) and to determine whether it was the deficit in temporal resolution or frequency resolution that might lead to more detrimental effects on FS processing in pitch perception. Fifty-eight native Mandarin Chinese-speaking subjects (27 with ANSD, 16 with SNHL, and 15 with NH) were assessed for (1) their ability to recognize lexical tones using acoustic E or FS cues with the "auditory chimera" technique, (2) temporal resolution as measured with temporal gap detection (TGD) threshold, and (3) frequency resolution as measured with the Q10dB values of the psychophysical tuning curves. Overall, 26.5%, 60.2%, and 92.1% of lexical tone responses were consistent with FS cues for tone perception for listeners with ANSD, SNHL, and NH, respectively. The mean TGD threshold was significantly higher for listeners with ANSD (11.9 ms) than for SNHL (4.0 ms; p < 0.001) and NH (3.9 ms; p < 0.001) listeners, with no significant difference between SNHL and NH listeners. In contrast, the mean Q10dB for listeners with SNHL (1.8 ± 0.4) was significantly lower than that for ANSD (3.5 ± 1.0; p < 0.001) and NH (3.4 ± 0.9; p < 0.001) listeners, with no significant difference between ANSD and NH listeners. These results suggest that reduced temporal resolution, as opposed to reduced frequency selectivity, in ANSD subjects leads to greater degradation of FS processing for pitch perception.
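
    The "auditory chimera" technique mentioned above carries the envelope (E) of one sound on the fine structure (FS) of another within each frequency band. A single-band Hilbert-transform sketch conveys the idea; real chimeras repeat this inside every band of a filterbank, and the toy tones below are assumptions for illustration only:

        import numpy as np
        from scipy.signal import hilbert

        def envelope_and_fs(x):
            """Hilbert-based split of a single-band signal into temporal envelope and
            fine structure. One band only; for illustration."""
            analytic = hilbert(x)
            envelope = np.abs(analytic)                   # E: slow amplitude modulation
            fine_structure = np.cos(np.angle(analytic))   # FS: rapid carrier oscillation
            return envelope, fine_structure

        def single_band_chimera(sound_a, sound_b):
            """Carry the envelope of sound_a on the fine structure of sound_b."""
            env_a, _ = envelope_and_fs(sound_a)
            _, fs_b = envelope_and_fs(sound_b)
            return env_a * fs_b

        # Toy example: a 150 Hz tone with a rising envelope vs. a 220 Hz tone with a falling one.
        sr = 16000
        t = np.arange(0, 0.5, 1 / sr)
        a = np.linspace(0, 1, t.size) * np.sin(2 * np.pi * 150 * t)
        b = np.linspace(1, 0, t.size) * np.sin(2 * np.pi * 220 * t)
        chimera = single_band_chimera(a, b)   # b's carrier frequency with a's loudness contour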

  17. Logarithmic temporal axis manipulation and its application for measuring auditory contributions in F0 control using a transformed auditory feedback procedure

    Science.gov (United States)

    Yanaga, Ryuichiro; Kawahara, Hideki

    2003-10-01

    A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information, in terms of dynamic aspects of F0 control, to recent findings reported using a frequency-shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)]. In a series of experiments, dependencies of system parameters in F0 control on subjects, F0 and style (musical expressions and speaking) were tested using six participants. They were three male and three female students specialized in musical education. They were asked to sustain a Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech, with the modulation driven by an M-sequence. The results replicated qualitatively the previous finding [Kawahara and Williams, Vocal Fold Physiology, (1995)] and provided more accurate estimates. Relations with designing an artificial singer will also be discussed. [Work partly supported by Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
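
    A logarithmic transformation of the temporal axis, as used above, amounts to resampling the signal onto a grid that is uniform in log-time. A small sketch (the offset t0 and all names are arbitrary choices, not the authors' extraction procedure):

        import numpy as np

        def to_log_time(x, fs, t0=0.01, n_out=None):
            """Resample x (sampled uniformly at fs) onto a grid that is uniform in
            log(t + t0). t0 avoids the log singularity at t = 0; values are arbitrary."""
            n_out = n_out or x.size
            t = np.arange(x.size) / fs
            log_grid = np.linspace(np.log(t0), np.log(t[-1] + t0), n_out)
            return np.interp(np.exp(log_grid) - t0, t, x)

        # Example: warp a 1-s, 5-Hz sinusoid sampled at 1 kHz onto a log-time grid.
        warped = to_log_time(np.sin(2 * np.pi * 5 * np.arange(1000) / 1000), fs=1000)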

  18. Echoic Memory: Investigation of Its Temporal Resolution by Auditory Offset Cortical Responses

    OpenAIRE

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temp...

  19. Integration of auditory and kinesthetic information in motion: alterations in Parkinson's disease.

    Science.gov (United States)

    Sabaté, Magdalena; Llanos, Catalina; Rodríguez, Manuel

    2008-07-01

    The main aim in this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motion control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young and age-matched healthy participants, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) and under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensorial modalities. These data show that the influence of the multisensory integration on motion changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that was different in PD-patients. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  20. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    Science.gov (United States)

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  1. The role of auditory temporal cues in the fluency of stuttering adults

    Directory of Open Access Journals (Sweden)

    Juliana Furini

    Full Text Available ABSTRACT Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering in non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with stuttering (Research Group - RG), and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds delayed by Fono Tools software). Results: the DAF caused a significant improvement in the fluency of spontaneous speech in the RG when compared to speech under NAF. The effect of DAF was different in the CG, because it increased the common disfluencies and the total of disfluencies in spontaneous speech and reading, besides showing an increase in the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks for the two listening conditions in the frequency of stuttering-like disfluencies and in the total of disfluencies, and in the syllable- and word-per-minute rates under NAF. Conclusion: the results demonstrated that delayed auditory feedback promoted fluency in the spontaneous speech of adults who stutter, without interfering with speech rate. In non-stuttering adults, an increase occurred in the number of common disfluencies and the total of disfluencies, as well as a reduction of speech rate in spontaneous speech and reading.

  2. Integrating speech in time depends on temporal expectancies and attention.

    Science.gov (United States)

    Scharinger, Mathias; Steinberg, Johanna; Tavano, Alessandro

    2017-08-01

    Sensory information that unfolds in time, such as in speech perception, relies on efficient chunking mechanisms in order to yield optimally-sized units for further processing. Whether or not two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration should be, especially for the processing of speech sounds. Furthermore, there is no direct evidence of whether attention may modulate the temporal constraints on the integration window. For this reason, we here examine how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an Electroencephalography (EEG) study, participants actively and passively listened to words where word-final consonants were occasionally omitted. Words had either a natural duration or were artificially prolonged in order to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation was larger than the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, only increased omission responses for stimuli with natural durations. We complemented the event-related potential (ERP) analyses by a frequency-domain analysis on the stimulus presentation rate. Notably, the power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings on the background of existing research on temporal integration windows and further suggest that our findings may be accounted for within the framework of predictive coding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Language-dependent changes in pitch-relevant neural activity in the auditory cortex reflect differential weighting of temporal attributes of pitch contours

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T.; Xu, Yi; Suresh, Chandan H.

    2016-01-01

    There remains a gap in our knowledge base about the neural representation of pitch attributes that occur between onset and offset of dynamic, curvilinear pitch contours. The aim is to evaluate how language experience shapes processing of pitch contours as reflected in the amplitude of cortical pitch-specific response components. Responses were elicited from three nonspeech, bidirectional (falling-rising) pitch contours representative of Mandarin Tone 2, varying in location of the turning point with fixed onset and offset. At the frontocentral Fz electrode site, Na–Pb and Pb–Nb amplitude of the Chinese group was larger than that of the English group for pitch contours exhibiting a later location of the turning point relative to the one with the earliest location. Chinese listeners' amplitude was also greater than that of English listeners in response to those same pitch contours with later turning points. At lateral temporal sites (T7/T8), Na–Pb amplitude was larger in Chinese listeners relative to English listeners over the right temporal site. In addition, Pb–Nb amplitude of the Chinese group showed a rightward asymmetry. The pitch contour with its turning point located about halfway through the total duration evoked a rightward asymmetry regardless of group. These findings suggest that neural mechanisms processing pitch in the right auditory cortex reflect experience-dependent modulation of sensitivity to the weighted integration of changes in acceleration rates of rising and falling sections and the location of the turning point. PMID:28713201

  4. Reorganization in processing of spectral and temporal input in the rat posterior auditory field induced by environmental enrichment

    Science.gov (United States)

    Jakkamsetti, Vikram; Chang, Kevin Q.

    2012-01-01

    Environmental enrichment induces powerful changes in the adult cerebral cortex. Studies in primary sensory cortex have observed that environmental enrichment modulates neuronal response strength, selectivity, speed of response, and synchronization to rapid sensory input. Other reports suggest that nonprimary sensory fields are more plastic than primary sensory cortex. The consequences of environmental enrichment on information processing in nonprimary sensory cortex have yet to be studied. Here we examine physiological effects of enrichment in the posterior auditory field (PAF), a field distinguished from primary auditory cortex (A1) by wider receptive fields, slower response times, and a greater preference for slowly modulated sounds. Environmental enrichment induced a significant increase in spectral and temporal selectivity in PAF. PAF neurons exhibited narrower receptive fields and responded significantly faster and for a briefer period to sounds after enrichment. Enrichment increased time-locking to rapidly successive sensory input in PAF neurons. Compared with previous enrichment studies in A1, we observe a greater magnitude of reorganization in PAF after environmental enrichment. Along with other reports observing greater reorganization in nonprimary sensory cortex, our results in PAF suggest that nonprimary fields might have a greater capacity for reorganization compared with primary fields. PMID:22131375

  5. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  6. Integrating what and when across the primate medial temporal lobe.

    Science.gov (United States)

    Naya, Yuji; Suzuki, Wendy A

    2011-08-05

    Episodic memory or memory for the detailed events in our lives is critically dependent on structures of the medial temporal lobe (MTL). A fundamental component of episodic memory is memory for the temporal order of items within an episode. To understand the contribution of individual MTL structures to temporal-order memory, we recorded single-unit activity and local field potential from three MTL areas (hippocampus and entorhinal and perirhinal cortex) and visual area TE as monkeys performed a temporal-order memory task. Hippocampus provided incremental timing signals from one item presentation to the next, whereas perirhinal cortex signaled the conjunction of items and their relative temporal order. Thus, perirhinal cortex appeared to integrate timing information from hippocampus with item information from visual sensory area TE.

  7. Temporal integration of loudness as a function of level

    DEFF Research Database (Denmark)

    Florentine, Mary; Buus, Søren; Poulsen, Torben

    1996-01-01

    Absolute thresholds were measured for 5-, 30-, and 200-ms stimuli using an adaptive, forced choice procedure. Temporal integration of loudness for these durations was also measured for a 1-kHz tone and for broadband noises over a range of 5-80 dB SL (noise) and 5-90 dB SL (tones). Results show...... that temporal integration (defined as the level difference between equally loud 5- and 200-ms stimuli) varies non-monotonically with level. The temporal integration is about 10-12 dB near threshold, increases to 18-19 dB when the 5-ms signal is about 56 dB SPL (tone) and 76 dB SPL (noise), decreases again...... that the growth of loudness may, at least in part, be consistent with the nonlinear input/output function of the basilar membrane....
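
    The defining computation in this record is simple: temporal integration is the level difference between equally loud 5-ms and 200-ms stimuli. The sketch below spells it out with illustrative round numbers taken from the ranges quoted in the abstract, not the study's actual data.

        def temporal_integration(level_5ms_db, level_200ms_db):
            """Level difference (dB) between equally loud 5-ms and 200-ms stimuli."""
            return level_5ms_db - level_200ms_db

        # Matched levels (dB) chosen to fall in the ranges quoted in the abstract.
        matched_pairs = [(20.0, 9.0),    # near threshold: about 11 dB of integration
                         (56.0, 38.0)]   # 5-ms tone near 56 dB SPL: about 18 dB
        for l5, l200 in matched_pairs:
            print(f"5 ms at {l5} dB == 200 ms at {l200} dB -> {temporal_integration(l5, l200):.0f} dB")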

  8. Spatio-temporal data analytics for wind energy integration

    CERN Document Server

    Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief presents spatio-temporal data analytics for wind energy integration using stochastic modeling and optimization methods. It explores techniques for efficiently integrating renewable energy generation into bulk power grids. The operational challenges of wind and its variability are carefully examined. A spatio-temporal analysis approach enables the authors to develop Markov-chain-based short-term forecasts of wind farm power generation. To deal with wind ramp dynamics, a support vector machine enhanced Markov model is introduced. The stochastic optimization of economic di
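
    As a rough illustration of the Markov-chain forecasting idea mentioned above, the sketch below fits a first-order transition matrix to a discretized wind-power series and makes a one-step-ahead forecast. The actual model in the brief is spatio-temporal and SVM-enhanced; the state bins and sample series here are invented for illustration.

        import numpy as np

        def fit_transition_matrix(states, n_states):
            counts = np.zeros((n_states, n_states))
            for s, s_next in zip(states[:-1], states[1:]):
                counts[s, s_next] += 1
            row_sums = counts.sum(axis=1, keepdims=True)
            row_sums[row_sums == 0] = 1.0              # avoid dividing by zero
            return counts / row_sums

        def forecast_expected_state(P, current_state):
            """One-step-ahead expected state index under the fitted chain."""
            return float(P[current_state] @ np.arange(P.shape[0]))

        # Wind-farm output discretized into 5 bins (0 = calm ... 4 = rated power).
        series = np.array([0, 1, 1, 2, 3, 3, 4, 3, 2, 2, 1, 0, 1, 2, 3, 4, 4, 3])
        P = fit_transition_matrix(series, n_states=5)
        print(forecast_expected_state(P, current_state=series[-1]))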

  9. Monkey’s short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices

    Science.gov (United States)

    Fritz, Jonathan B.; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C.

    2016-01-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30–40 seconds to a duration of ~1–2 seconds, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. PMID:26707975

  10. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

    Yael eHazan

    2015-02-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras to barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely towards the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target.

  11. A Nonlinear Transmission Line Model of the Cochlea With Temporal Integration Accounts for Duration Effects in Threshold Fine Structure

    DEFF Research Database (Denmark)

    Verhey, Jesko L.; Mauermann, Manfred; Epp, Bastian

    2017-01-01

    For normal-hearing listeners, auditory pure-tone thresholds in quiet often show quasi-periodic fluctuations when measured with a high frequency resolution, referred to as threshold fine structure. Threshold fine structure is dependent on the stimulus duration, with smaller fluctuations for short...... than for long signals. The present study demonstrates how this effect can be captured by a nonlinear and active model of the cochlea in combination with a temporal integration stage. Since this cochlear model also accounts for fine structure and connected level-dependent effects, it is superior...
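
    A temporal integration stage of the kind mentioned above can be illustrated with a simple leaky integrator of stimulus intensity: short tones produce less integrated output than long tones of the same level, which is one way to picture why their thresholds are higher. The sketch below is not the transmission-line cochlear model of the study; the time constant and tone parameters are assumptions.

        import numpy as np

        def integrated_output(duration_s, level=1.0, tau_s=0.1, fs=16000):
            """Peak output of a first-order leaky integrator driven by constant intensity."""
            intensity = np.full(int(duration_s * fs), level ** 2)
            out, alpha, peak = 0.0, 1.0 / (tau_s * fs), 0.0
            for x in intensity:
                out += alpha * (x - out)               # exponential (leaky) integration
                peak = max(peak, out)
            return peak

        for dur in (0.005, 0.030, 0.200):
            print(f"{dur * 1000:5.0f} ms tone -> integrated output {integrated_output(dur):.3f}")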

  12. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  13. Auditory temporal-order processing of vowel sequences by young and elderly listeners.

    Science.gov (United States)

    Fogerty, Daniel; Humes, Larry E; Kewley-Port, Diane

    2010-04-01

    This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18-31 years) and older (N=151; 60-88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners had more variability and performed poorer than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners' SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences of temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age.
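
    The constant-stimuli procedure described above yields proportion-correct data at several SOAs, from which a threshold can be read off a fitted psychometric function. The sketch below shows one generic way of doing this; the logistic form, the 75% criterion, and the simulated listener data are assumptions rather than the study's actual fitting procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(soa, midpoint, slope, chance=0.5):
            return chance + (1.0 - chance) / (1.0 + np.exp(-(soa - midpoint) / slope))

        soas = np.array([10, 20, 40, 80, 160, 320], dtype=float)    # SOAs in ms
        p_correct = np.array([0.52, 0.55, 0.66, 0.84, 0.95, 0.99])  # simulated listener

        (midpoint, slope), _ = curve_fit(logistic, soas, p_correct, p0=[60.0, 20.0])
        criterion = 0.75
        threshold = midpoint - slope * np.log(0.5 / (criterion - 0.5) - 1.0)
        print(f"SOA threshold at {criterion:.0%} correct: {threshold:.1f} ms")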

  14. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    Science.gov (United States)

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  15. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp eHoman

    2013-06-01

    Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left superior temporal gyrus (STG) and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  16. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  17. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    International Nuclear Information System (INIS)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C.; Devauchelle, A.D.; Dehaene, S.; Pallier, C.; Devauchelle, A.D.; Pallier, C.; Oppenheim, C.; Rizzi, L.; Dehaene, S.

    2009-01-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  18. Auditory verbal hallucinations are related to cortical thinning in the left middle temporal gyrus of patients with schizophrenia.

    Science.gov (United States)

    Cui, Y; Liu, B; Song, M; Lipnicki, D M; Li, J; Xie, S; Chen, Y; Li, P; Lu, L; Lv, L; Wang, H; Yan, H; Yan, J; Zhang, H; Zhang, D; Jiang, T

    2018-01-01

    Auditory verbal hallucinations (AVHs) are one of the most common and severe symptoms of schizophrenia, but the neuroanatomical abnormalities underlying AVHs are not well understood. The present study aims to investigate whether AVHs are associated with cortical thinning. Participants were schizophrenia patients from four centers across China, 115 with AVHs and 93 without AVHs, as well as 261 healthy controls. All received 3 T T1-weighted brain scans, and whole brain vertex-wise cortical thickness was compared across groups. Correlations between AVH severity and cortical thickness were also determined. The left middle part of the middle temporal gyrus (MTG) was significantly thinner in schizophrenia patients with AVHs than in patients without AVHs and healthy controls. Inferences were made using a false discovery rate approach with a threshold at p < 0.05. Left MTG thickness did not differ between patients without AVHs and controls. These results were replicated by a meta-analysis showing them to be consistent across the four centers. Cortical thickness of the left MTG was also found to be inversely correlated with hallucination severity across all schizophrenia patients. The results of this multi-center study suggest that an abnormally thin left MTG could be involved in the pathogenesis of AVHs in schizophrenia.

  19. Role of consciousness in temporal integration of semantic information.

    Science.gov (United States)

    Yang, Yung-Hao; Tien, Yung-Hsuan; Yang, Pei-Ling; Yeh, Su-Ling

    2017-10-01

    Previous studies found that word meaning can be processed unconsciously. Yet it remains unknown whether temporally segregated words can be integrated into a holistic meaningful phrase without consciousness. The first four experiments were designed to examine this by sequentially presenting the first three words of Chinese four-word idioms as prime to one eye and dynamic Mondrians to the other (i.e., the continuous flash suppression paradigm; CFS). An unmasked target word followed the three masked words in a lexical decision task. Results from such invisible (CFS) condition were compared with the visible condition where the preceding words were superimposed on the Mondrians and presented to both eyes. Lower performance in behavioral experiments and larger N400 event-related potentials (ERP) component for incongruent- than congruent-ending words were found in the visible condition. However, no such congruency effect was found in the invisible condition, even with enhanced statistical power and top-down attention, and with several potential confounding factors (contrast-dependent processing, long interval, no conscious training) excluded. Experiment 5 demonstrated that familiarity of word orientation without temporal integration can be processed unconsciously, excluding the possibility of general insensitivity of our paradigm. The overall result pattern therefore suggests that consciousness plays an important role in semantic temporal integration in the conditions we tested.

  20. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
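
    The MAP-plus-gradient-descent machinery described above can be illustrated on a toy problem: fuse several noisy observations of one signal by minimizing a data term plus a smoothness prior. The sketch below is a deliberately simplified 1-D stand-in for the paper's observation models; the prior weight, step size, and data are assumptions.

        import numpy as np

        def map_fuse(observations, lam=0.5, step=0.1, n_iter=500):
            """Fuse noisy observations (n_obs x n_pixels) of one signal by gradient descent."""
            x = observations.mean(axis=0)                   # initial estimate
            for _ in range(n_iter):
                data_grad = (x - observations).sum(axis=0)  # gradient of the data term
                smooth_grad = np.zeros_like(x)              # gradient of the smoothness prior
                smooth_grad[:-1] += lam * (x[:-1] - x[1:])
                smooth_grad[1:] += lam * (x[1:] - x[:-1])
                x = x - step * (data_grad + smooth_grad)
            return x

        truth = np.sin(np.linspace(0, 2 * np.pi, 64))
        observations = truth + 0.3 * np.random.randn(4, 64)  # four noisy "sensors"
        fused = map_fuse(observations)
        print(np.mean((fused - truth) ** 2) < np.mean((observations[0] - truth) ** 2))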

  1. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  2. Aging and Spectro-Temporal Integration of Speech

    Directory of Open Access Journals (Sweden)

    John H. Grose

    2016-10-01

    The purpose of this study was to determine the effects of age on the spectro-temporal integration of speech. The hypothesis was that the integration of speech fragments distributed over frequency, time, and ear of presentation is reduced in older listeners—even for those with good audiometric hearing. Younger, middle-aged, and older listeners (10 per group) with good audiometric hearing participated. They were each tested under seven conditions that encompassed combinations of spectral, temporal, and binaural integration. Sentences were filtered into two bands centered at 500 Hz and 2500 Hz, with criterion bandwidth tailored for each participant. In some conditions, the speech bands were individually square-wave interrupted at a rate of 10 Hz. Configurations of uninterrupted, synchronously interrupted, and asynchronously interrupted frequency bands were constructed that constituted speech fragments distributed across frequency, time, and ear of presentation. The over-arching finding was that, for most configurations, performance was not differentially affected by listener age. Although speech intelligibility varied across condition, there was no evidence of performance deficits in older listeners in any condition. This study indicates that age, per se, does not necessarily undermine the ability to integrate fragments of speech dispersed across frequency and time.
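
    The stimulus construction described above, narrow speech bands that can be square-wave interrupted at 10 Hz, can be sketched roughly as follows. The band edges, filter order, and the noise stand-in for speech are assumptions; the study tailored bandwidths per participant.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 16000
        t = np.arange(0, 1.0, 1.0 / fs)
        speech = np.random.randn(t.size)                   # noise stand-in for a sentence

        def band(signal, low_hz, high_hz):
            sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
            return sosfiltfilt(sos, signal)

        low_band = band(speech, 400, 600)                  # band centred near 500 Hz
        high_band = band(speech, 2300, 2700)               # band centred near 2500 Hz

        gate = (np.floor(2 * 10 * t) % 2 == 0).astype(float)   # 10-Hz square-wave gating
        interrupted_low = low_band * gate                  # synchronous interruption
        interrupted_high = high_band * gate
        print(gate.mean())                                 # gate is on roughly half of the time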

  3. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  4. Medial temporal lobe roles in human path integration.

    Directory of Open Access Journals (Sweden)

    Naohide Yamamoto

    Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.

  5. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Visual and kinesthetic locomotor imagery training integrated with auditory step rhythm for walking performance of patients with chronic stroke.

    Science.gov (United States)

    Kim, Jin-Seop; Oh, Duck-Won; Kim, Suhn-Yeop; Choi, Jong-Duk

    2011-02-01

    To compare the effect of visual and kinesthetic locomotor imagery training on walking performance and to determine the clinical feasibility of incorporating auditory step rhythm into the training. Randomized crossover trial. Laboratory of a Department of Physical Therapy. Fifteen subjects with post-stroke hemiparesis. Four locomotor imagery trainings on walking performance: visual locomotor imagery training, kinesthetic locomotor imagery training, visual locomotor imagery training with auditory step rhythm and kinesthetic locomotor imagery training with auditory step rhythm. The timed up-and-go test and electromyographic and kinematic analyses of the affected lower limb during one gait cycle. After the interventions, significant differences were found in the timed up-and-go test results between the visual locomotor imagery training (25.69 ± 16.16 to 23.97 ± 14.30) and the kinesthetic locomotor imagery training with auditory step rhythm (22.68 ± 12.35 to 15.77 ± 8.58) (P kinesthetic locomotor imagery training exhibited significantly increased activation in a greater number of muscles and increased angular displacement of the knee and ankle joints compared with the visual locomotor imagery training, and these effects were more prominent when auditory step rhythm was integrated into each form of locomotor imagery training. The activation of the hamstring during the swing phase and the gastrocnemius during the stance phase, as well as kinematic data of the knee joint, were significantly different for posttest values between the visual locomotor imagery training and the kinesthetic locomotor imagery training with auditory step rhythm (P kinesthetic locomotor imagery training than in the visual locomotor imagery training. The auditory step rhythm together with the locomotor imagery training produces a greater positive effect in improving the walking performance of patients with post-stroke hemiparesis.

  7. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    Directory of Open Access Journals (Sweden)

    Mark eLaing

    2015-10-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only or auditory-visual (AV) trials in the scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for auditory-visual integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.

  8. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an autosomal dominant epileptic syndrome characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were afterwards screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and of the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in suggestive generations, which is especially important for genetic counselling. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    Science.gov (United States)

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  10. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means ... Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.

  11. Enabling an Integrated Rate-temporal Learning Scheme on Memristor

    Science.gov (United States)

    He, Wei; Huang, Kejie; Ning, Ning; Ramanathan, Kiruthika; Li, Guoqi; Jiang, Yu; Sze, Jiayin; Shi, Luping; Zhao, Rong; Pei, Jing

    2014-04-01

    The learning scheme is the key to the utilization of spike-based computation and the emulation of neural/synaptic behaviors toward realization of cognition. Biological observations reveal an integrated spike time- and spike rate-dependent plasticity as a function of presynaptic firing frequency. However, this integrated rate-temporal learning scheme has not been realized on any nanodevices. In this paper, such a scheme is successfully demonstrated on a memristor. Great robustness against spiking-rate fluctuations is achieved by waveform engineering with the aid of the good analog properties exhibited by the iron oxide-based memristor. Spike-timing-dependent plasticity (STDP) occurs at moderate presynaptic firing frequencies and spike-rate-dependent plasticity (SRDP) dominates in the other frequency regions. This demonstration provides a novel approach to neural coding implementation, which facilitates the development of bio-inspired computing systems.
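
    One way to read the integrated rate-temporal scheme described above is as a timing-based (STDP) rule at moderate presynaptic rates and a rate-based (SRDP) rule elsewhere. The sketch below expresses that reading in software with assumed crossover rates, amplitudes, and time constants; the paper realizes the scheme in memristor hardware through waveform engineering, not in code.

        import math

        def weight_update(dt_ms, pre_rate_hz, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
            """Weight change for one pre/post spike pair; dt_ms = t_post - t_pre."""
            if 5.0 <= pre_rate_hz <= 50.0:                 # timing-dominated (STDP) regime
                if dt_ms >= 0:
                    return a_plus * math.exp(-dt_ms / tau_ms)
                return -a_minus * math.exp(dt_ms / tau_ms)
            # rate-dominated (SRDP) regime: depress at very low rates, potentiate at high rates
            return -0.01 if pre_rate_hz < 5.0 else 0.02

        print(weight_update(dt_ms=+10.0, pre_rate_hz=20.0))   # timing-driven potentiation
        print(weight_update(dt_ms=-10.0, pre_rate_hz=20.0))   # timing-driven depression
        print(weight_update(dt_ms=+10.0, pre_rate_hz=80.0))   # rate-driven potentiation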

  12. Sustained Firing of Model Central Auditory Neurons Yields a Discriminative Spectro-temporal Representation for Natural Sounds

    OpenAIRE

    Carlin, Michael A.; Elhilali, Mounya

    2013-01-01

    The processing characteristics of neurons in the central auditory system are directly shaped by and reflect the statistics of natural acoustic environments, but the principles that govern the relationship between natural sound ensembles and observed responses in neurophysiological studies remain unclear. In particular, accumulating evidence suggests the presence of a code based on sustained neural firing rates, where central auditory neurons exhibit strong, persistent responses to their prefe...

  13. Evaluation of temporal bone pneumatization on high resolution CT (HRCT) measurements of the temporal bone in normal and otitis media group and their correlation to measurements of internal auditory meatus, vestibular or cochlear aqueduct

    International Nuclear Information System (INIS)

    Nakamura, Miyako

    1988-01-01

    High resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These cases consisted of 109 sides with normal pneumatization (NR group) and 73 sides with poor pneumatization resulting from chronic otitis (OM group). The NR group included cases of sensorineural hearing loss and/or sudden deafness on that side. Three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter and length of the internal auditory meatus, the vestibular aqueduct or the cochlear aqueduct were measured. Measurements of the temporal bone showed statistically significant differences between the NR and OM groups. Correlations of both the diameter and the length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither measurement of the vestibular or the cochlear aqueduct showed any significant correlation with those of the temporal bone. (author)

  14. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of its high spatial resolution and noninvasiveness. Applying fMRI to the auditory pathway remains a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system

  15. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Directory of Open Access Journals (Sweden)

    Jonathan M P Wilbiks

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  16. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  17. Laevo: A Temporal Desktop Interface for Integrated Knowledge Work

    DEFF Research Database (Denmark)

    Jeuris, Steven; Houben, Steven; Bardram, Jakob

    2014-01-01

    Prior studies show that knowledge work is characterized by highly interlinked practices, including task, file and window management. However, existing personal information management tools primarily focus on a limited subset of knowledge work, forcing users to perform additional manual configuration work to integrate the different tools they use. In order to understand tool usage, we review literature on how users' activities are created and evolve over time as part of knowledge worker practices. From this we derive the activity life cycle, a conceptual framework describing the different states and transitions of an activity. The life cycle is used to inform the design of Laevo, a temporal activity-centric desktop interface for personal knowledge work. Laevo allows users to structure work within dedicated workspaces, managed on a timeline. Through a centralized notification system which ...

  18. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  19. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity.

    Science.gov (United States)

    Laing, Mark; Rees, Adrian; Vuong, Quoc C

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
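
    The psychophysiological interaction (PPI) analysis mentioned in this record boils down to testing an interaction regressor, the product of a seed-region time course and a condition vector, in a regression for each target voxel. The sketch below shows that core step on simulated data; real PPI pipelines also deconvolve to the neural level, convolve with a haemodynamic response, and add nuisance regressors, and all values here are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        seed = rng.standard_normal(n)                      # seed-region time course
        psy = np.repeat([1.0, -1.0], n // 2)               # condition: +1 incongruent, -1 congruent
        # Simulated target voxel: coupled to the seed more strongly in the +1 condition.
        target = 0.4 * seed + 0.5 * seed * (psy > 0) + 0.1 * rng.standard_normal(n)

        X = np.column_stack([seed, psy, seed * psy, np.ones(n)])   # PPI design matrix
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        print(f"interaction (PPI) weight: {beta[2]:.2f}")  # > 0: stronger coupling when psy = +1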

  20. An integrative model of auditory phantom perception: tinnitus as a unified percept of interacting separable subnetworks.

    Science.gov (United States)

    De Ridder, Dirk; Vanneste, Sven; Weisz, Nathan; Londero, Alain; Schlee, Winnie; Elgoyhen, Ana Belen; Langguth, Berthold

    2014-07-01

    Tinnitus is considered to be an auditory phantom phenomenon, a persistent conscious percept of a salient memory trace, externally attributed, in the absence of a sound source. It is perceived as a phenomenological unified coherent percept, binding multiple separable clinical characteristics, such as its loudness, the sidedness, the type (pure tone, noise), the associated distress and so on. A theoretical pathophysiological framework capable of explaining all these aspects in one model is highly needed. The model must incorporate both the deafferentation-based neurophysiological models and the dysfunctional noise canceling model, and propose a 'tinnitus core' subnetwork. The tinnitus core can be defined as the minimal set of brain areas that needs to be jointly activated (=subnetwork) for tinnitus to be consciously perceived, devoid of its affective components. The brain areas involved in the other separable characteristics of tinnitus can be retrieved by studies on spontaneous resting state magnetic and electrical activity in people with tinnitus, evaluated for the specific aspect investigated and controlled for other factors. By combining these functional imaging studies with neuromodulation techniques some of the correlations are turned into causal relationships. From this, a heuristic pathophysiological framework is constructed, integrating the tinnitus perceptual core with the other tinnitus-related aspects. This phenomenological unified percept of tinnitus can be considered an emergent property of multiple, parallel, dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. Communication between these different subnetworks is proposed to occur at hubs, brain areas that are involved in multiple subnetworks simultaneously. These hubs can take part in each separable subnetwork at different frequencies. Communication between the subnetworks is proposed to occur at

  1. Preservation of perceptual integration improves temporal stability of bimanual coordination in the elderly: an evidence of age-related brain plasticity.

    Science.gov (United States)

    Blais, Mélody; Martin, Elodie; Albaret, Jean-Michel; Tallet, Jessica

    2014-12-15

    Despite the apparent age-related decline in perceptual-motor performance, recent studies suggest that elderly people can improve their reaction time when relevant sensory information is available. However, little is known about which sensory information may improve motor behaviour itself. Using a synchronization task, the present study investigates how visual and/or auditory stimulations could increase accuracy and stability of three bimanual coordination modes produced by elderly and young adults. Neurophysiological activations are recorded with electroencephalography (EEG) to explore neural mechanisms underlying behavioural effects. Results reveal that the elderly stabilize all coordination modes when auditory or audio-visual stimulations are available, compared to visual stimulation alone. This suggests that auditory stimulations are sufficient to improve temporal stability of rhythmic coordination, even more in the elderly. This behavioural effect is primarily associated with increased attentional and sensorimotor-related neural activations in the elderly but similar perceptual-related activations in elderly and young adults. This suggests that, despite a degradation of attentional and sensorimotor neural processes, perceptual integration of auditory stimulations is preserved in the elderly. These results suggest that perceptual-related brain plasticity is, at least partially, conserved in normal aging. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  3. Visual form predictions facilitate auditory processing at the N1.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  4. Discrimination of acoustic communication signals by grasshoppers (Chorthippus biguttulus): temporal resolution, temporal integration, and the impact of intrinsic noise.

    Science.gov (United States)

    Ronacher, Bernhard; Wohlgemuth, Sandra; Vogel, Astrid; Krahe, Rüdiger

    2008-08-01

    A characteristic feature of hearing systems is their ability to resolve both fast and subtle amplitude modulations of acoustic signals. This applies also to grasshoppers, which for mate identification rely mainly on the characteristic temporal patterns of their communication signals. Usually the signals arriving at a receiver are contaminated by various kinds of noise. In addition to extrinsic noise, intrinsic noise caused by stochastic processes within the nervous system contributes to making signal recognition a difficult task. The authors asked to what degree intrinsic noise affects temporal resolution and, particularly, the discrimination of similar acoustic signals. This study aims at exploring the neuronal basis for sexual selection, which depends on exploiting subtle differences between basically similar signals. Applying a metric, by which the similarities of spike trains can be assessed, the authors investigated how well the communication signals of different individuals of the same species could be discriminated and correctly classified based on the responses of auditory neurons. This spike train metric yields clues to the optimal temporal resolution with which spike trains should be evaluated. (c) 2008 APA, all rights reserved
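
    The spike-train metric mentioned above is not named in this record, so the following is only an illustrative sketch under assumed choices: it uses the van Rossum distance (one standard spike-train metric) and a nearest-template rule to assign a response to the individual whose song it most resembles. The time constant tau and the toy spike times are assumptions, not values from the study.

      import numpy as np

      def van_rossum_distance(train_a, train_b, tau=5.0, dt=0.1, t_max=200.0):
          """Illustrative van Rossum distance: convolve each spike train (spike
          times in ms) with an exponential kernel of time constant tau and
          integrate the squared difference of the resulting traces."""
          t = np.arange(0.0, t_max, dt)
          def filtered(train):
              trace = np.zeros_like(t)
              for spike in train:
                  mask = t >= spike
                  trace[mask] += np.exp(-(t[mask] - spike) / tau)
              return trace
          diff = filtered(train_a) - filtered(train_b)
          return np.sqrt(np.sum(diff ** 2) * dt / tau)

      def classify_by_nearest_template(response, templates):
          """Assign a response spike train to the stimulus whose template it is
          closest to under the metric (nearest-neighbour classification)."""
          dists = {label: van_rossum_distance(response, tmpl)
                   for label, tmpl in templates.items()}
          return min(dists, key=dists.get)

      # Toy example: spike times (ms) evoked by the songs of two individuals.
      templates = {"male_1": [10, 35, 60, 110], "male_2": [15, 55, 95, 150]}
      print(classify_by_nearest_template([12, 33, 62, 108], templates))  # -> male_1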

  5. The internal auditory clock: what can evoked potentials reveal about the analysis of temporal sound patterns, and abnormal states of consciousness?

    Science.gov (United States)

    Jones, S J

    2002-09-01

    Whereas in vision a large amount of information may in theory be extracted from instantaneous images, sound exists only in its temporal extent, and most of its information is contained in the pattern of changes over time. The "echoic memory" is a pre-attentive auditory sensory store in which sounds are apparently retained in full temporal detail for a period of a few seconds. From the long-latency auditory evoked potentials to spectro-temporal modulation of complex harmonic tones, at least two automatic sound analysis processes can be identified whose time constants suggest participation of the echoic memory. When a steady tone changes its pitch or timbre, "change-type" CP1, CN1 and CP2 potentials are maximally recorded near the vertex. These potentials appear to reflect a process concerned with the distribution of sound energy across the frequency spectrum. When, on the other hand, changes occur in the temporal pattern of tones (in which individual pitch changes are occurring at a rate sufficiently rapid for the C-potentials to be refractory), a large mismatch negativity (or MN1) and following positivity (MP2) are generated. The amplitude of these potentials is influenced by the degree of regularity of the pattern, larger responses being generated to a "deviant" tone when the pitch and time of occurrence of the "standards" are fully specified by the preceding pattern. At the sudden cessation of changes, on resumption of a steady pitch, a mismatch response is generated whose latency is determined with high precision (on the order of a few milliseconds) by the anticipated time of the next change, which did not in fact occur. The mismatch process, therefore, functions as a spectro-temporal auditory pattern analyser, whose consequences are manifested each time the pattern changes. Since calibration of the passage of time is essential for all conscious and subconscious behaviour, is it possible that some states of unconsciousness may be directly due to disruption of

  6. Gone in a Flash: Manipulation of Audiovisual Temporal Integration Using Transcranial Magnetic Stimulation

    Directory of Open Access Journals (Sweden)

    Roy Hamilton

    2013-09-01

    Full Text Available While converging evidence implicates the right inferior parietal lobule in audiovisual integration, its role has not been fully elucidated by direct manipulation of cortical activity. Replicating and extending an experiment initially reported by Kamke, Vieth, Cottrell, and Mattingley (2012), we employed the sound-induced flash illusion, in which a single visual flash, when accompanied by two auditory tones, is misperceived as multiple flashes (Wilson, 1987; Shams et al., 2000). Slow repetitive (1 Hz) TMS administered to the right angular gyrus, but not the right supramarginal gyrus, induced a transient decrease in the Peak Perceived Flashes (PPF), reflecting reduced susceptibility to the illusion. This finding independently confirms that perturbation of networks involved in multisensory integration can result in a more veridical representation of asynchronous auditory and visual events and that cross-modal integration is an active process in which the objective is the identification of a meaningful constellation of inputs, at times at the expense of accuracy.

  7. Comparison of capacity for diagnosis and visuality of auditory ossicles at different scanning angles in the computed tomography of temporal bone

    International Nuclear Information System (INIS)

    Ogura, Akio; Nakayama, Yoshiki

    1992-01-01

    Computed tomographic (CT) scanning has made significant contributions to the diagnosis and evaluation of temporal bone lesions through thin-section, high-resolution techniques. However, these techniques involve greater radiation exposure to the lens of patients. A means was thus sought for reducing radiation exposure by comparing different scanning angles, namely +15 degrees and -10 degrees to Reid's base line. The purposes of this study were to measure radiation exposure to the lens with the two tomographic planes and to compare their ability to visualize the auditory ossicles and labyrinthine structures. Visual evaluation of the tomographic images of the auditory ossicles was performed blind by six radiologists using a four-rank scale. The statistical significance of the intergroup difference in the visualization of the tomographic planes was assessed at a significance level of 0.01. Thermoluminescent dosimeter chips were placed on the cornea of a tissue-equivalent skull phantom to evaluate radiation exposure for the two tomographic planes. As a result, the tomographic plane at an angle of -10 degrees to Reid's base line allowed better visualization than the other plane for the malleus, incus, facial nerve canal, and tuba auditiva (p<0.01). Scanning at an angle of -10 degrees to Reid's base line reduced radiation exposure to approximately one-fiftieth (1/50) of that with scans at the other angle. (author)

  8. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth Hendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm to investigate processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations have an influence on temporal-order judgment and to determine thereby the time of processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal order decision in the dual task paradigm is made after perceptual processing of the stimuli.

  9. Long-term music training tunes how the brain temporally binds signals from multiple senses

    OpenAIRE

    Lee, HweeLing; Noppeney, Uta

    2011-01-01

    Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics–fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisua...

  10. Knockdown of the dyslexia-associated gene Kiaa0319 impairs temporal responses to speech stimuli in rat primary auditory cortex.

    Science.gov (United States)

    Centanni, T M; Booker, A B; Sloan, A M; Chen, F; Maher, B J; Carraway, R S; Khodaparast, N; Rennaker, R; LoTurco, J J; Kilgard, M P

    2014-07-01

    One in 15 school-age children has dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluate the effect in a rat model of phoneme discrimination. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the neural discrimination ability of speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience

    OpenAIRE

    Maria Kon

    2015-01-01

    Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities. I show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition. Here I discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the ...

  12. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders.

    Science.gov (United States)

    Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, and this may have long-term consequences on language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD.

  13. Two visual targets for the price of one? : Pupil dilation shows reduced mental effort through temporal integration

    NARCIS (Netherlands)

    Wolff, Michael J; Scholz, Sabine; Akyürek, Elkan G; van Rijn, Hedderik

    In dynamic sensory environments, successive stimuli may be combined perceptually and represented as a single, comprehensive event by means of temporal integration. Such perceptual segmentation across time is intuitively plausible. However, the possible costs and benefits of temporal integration in

  14. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  15. Perceiving temporal regularity in music: The role of auditory event-related potentials (ERPs) in probing beat perception

    NARCIS (Netherlands)

    Honing, H.; Bouwer, F.L.; Háden, G.P.; Merchant, H.; de Lafuente, V.

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Next to a review of the recent literature on the perception of temporal regularity in

  16. Laminar differences in response to simple and spectro-temporally complex sounds in the primary auditory cortex of ketamine-anesthetized gerbils.

    Directory of Open Access Journals (Sweden)

    Markus K Schaefer

    Full Text Available In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differed in their spectro-temporal properties. To accomplish this aim, we analyzed simple pure-tone and complex communication call elicited multi-unit activity (MUA) as well as local field potentials (LFP), and current source density (CSD) waveforms at the single-layer and columnar level from the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of "call-specificity" in the evoked activity. The results showed that whole laminar profiles segregated 1.8-2.6 times better across calls than single-layer activity. Also, laminar LFP and CSD profiles segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were more pronounced at mid and late latencies in the granular and infragranular layers and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.
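
    As a rough illustration of the multidimensional scaling step described above, the sketch below embeds laminar response profiles (one feature vector per call) into two dimensions from their pairwise Euclidean distances; well-separated points would indicate call-specific activity. The random profiles, their sizes, and the distance metric are placeholders, not the study's data or preprocessing.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from sklearn.manifold import MDS

      # Hypothetical laminar profiles: one row per communication call, columns are
      # concatenated per-layer CSD (or LFP) waveforms. Real data would replace this.
      rng = np.random.default_rng(0)
      n_calls, n_features = 6, 600
      profiles = rng.standard_normal((n_calls, n_features))

      # Pairwise Euclidean distances between the evoked profiles.
      dists = squareform(pdist(profiles, metric="euclidean"))

      # Two-dimensional metric MDS on the precomputed distances.
      embedding = MDS(n_components=2, dissimilarity="precomputed",
                      random_state=0).fit_transform(dists)
      print(embedding.shape)  # (6, 2)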

  17. Bilingualism protects anterior temporal lobe integrity in aging.

    Science.gov (United States)

    Abutalebi, Jubin; Canini, Matteo; Della Rosa, Pasquale A; Sheung, Lo Ping; Green, David W; Weekes, Brendan S

    2014-09-01

    Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases: the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral, though, because the effect of naming performance on GMV in the right temporal pole was not significantly different. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Iconic memory and parietofrontal network: fMRI study using temporal integration.

    Science.gov (United States)

    Saneyoshi, Ayako; Niimi, Ryosuke; Suetsugu, Tomoko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko

    2011-08-03

    We investigated the neural basis of iconic memory using functional magnetic resonance imaging. The parietofrontal network of selective attention is reportedly relevant to readout from iconic memory. We adopted a temporal integration task that requires iconic memory but not selective attention. The results showed that the task activated the parietofrontal network, confirming that the network is involved in readout from iconic memory. We further tested a condition in which temporal integration was performed by visual short-term memory but not by iconic memory. However, no brain region revealed higher activation for temporal integration by iconic memory than for temporal integration by visual short-term memory. This result suggested that there is no localized brain region specialized for iconic memory per se.

  19. New design for photonic temporal integration with combined high processing speed and long operation time window.

    Science.gov (United States)

    Asghari, Mohammad H; Park, Yongwoo; Azaña, José

    2011-01-17

    We propose and experimentally prove a novel design for implementing photonic temporal integrators simultaneously offering a high processing bandwidth and a long operation time window, namely a large time-bandwidth product. The proposed scheme is based on concatenating in series a time-limited ultrafast photonic temporal integrator, e.g. implemented using a fiber Bragg grating (FBG), with a discrete-time (bandwidth limited) optical integrator, e.g. implemented using an optical resonant cavity. This design combines the advantages of these two previously demonstrated photonic integrator solutions, providing a processing speed as high as that of the time-limited ultrafast integrator and an operation time window fixed by the discrete-time integrator. Proof-of-concept experiments are reported using a uniform fiber Bragg grating (as the original time-limited integrator) connected in series with a bulk-optics coherent interferometers' system (as a passive 4-points discrete-time photonic temporal integrator). Using this setup, we demonstrate accurate temporal integration of complex-field optical signals with time-features as fast as ~6 ps, only limited by the processing bandwidth of the FBG integrator, over time durations as long as ~200 ps, which represents a 4-fold improvement over the operation time window (~50 ps) of the original FBG integrator.
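
    A quick back-of-the-envelope check of the figures quoted above, treating the time-bandwidth product simply as the ratio of the operation time window to the fastest resolvable feature (a simplification of the formal definition):

      # Operation window T and fastest feature dt give roughly T / dt.
      def time_bandwidth_product(window_ps, feature_ps):
          return window_ps / feature_ps

      fbg_alone = time_bandwidth_product(50.0, 6.0)      # original FBG integrator
      concatenated = time_bandwidth_product(200.0, 6.0)  # FBG + discrete-time stage
      print(round(fbg_alone, 1), round(concatenated, 1),
            round(concatenated / fbg_alone, 1))
      # -> 8.3 33.3 4.0, consistent with the ~4-fold improvement reported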

  20. The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience

    Directory of Open Access Journals (Sweden)

    Maria Kon

    2015-05-01

    Full Text Available Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities. I show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition. Here I discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the mere succession of experiences. I question the presupposition that there is such an evident, clear distinction and suggest that, instead, how the distinction is drawn is context-dependent. After suggesting a way to modify the philosophical models of temporal experience to accommodate this context-dependency, I illustrate that these models can fruitfully incorporate features of research projects in embodied musical cognition. To do so I supplement a modified retentionalist model with aspects of recent work that links bodily movement with musical perception (Godøy, 2006; 2010a; Jensenius, Wanderley, Godøy, and Leman, 2010). The resulting model is shown to facilitate novel hypotheses, refine the notion of context-dependency and point towards means of extending the philosophical model and an existent research program.

  1. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
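
    The sketch below is a toy demonstration, not the authors' recurrent network model: with each input's firing probability held constant, a non-leaky integrator of the pooled input fluctuations accumulates faster as the inputs become more correlated (more spike coincidences). The shared-source correlation scheme and all parameters are assumptions.

      import numpy as np

      def correlated_spike_trains(n_inputs=100, p=0.05, c=0.2, n_bins=5000, rng=None):
          """Binary spike trains that copy a common 'mother' train with probability c;
          this induces positive pairwise correlations while keeping each input's
          firing probability fixed at p (constant mean input)."""
          rng = rng or np.random.default_rng(0)
          mother = rng.random(n_bins) < p
          own = rng.random((n_inputs, n_bins)) < p
          use_mother = rng.random((n_inputs, n_bins)) < c
          return np.where(use_mother, mother, own)

      rng = np.random.default_rng(1)
      for c in (0.0, 0.1, 0.3):
          spikes = correlated_spike_trains(c=c, rng=rng)
          pooled = spikes.sum(axis=0).astype(float)   # summed synaptic input per bin
          centered = pooled - pooled.mean()           # remove the constant mean component
          integrated = np.cumsum(centered ** 2)       # non-leaky integration of fluctuations
          rate = integrated[-1] / len(pooled)         # average growth rate of the integral
          print(f"c = {c:.1f}: variance-integration rate = {rate:.1f}")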

  2. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.

  3. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). Next to a review of the recent literature on the perception of temporal regularity in music, we will discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion on the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  4. Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation

    Science.gov (United States)

    Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr

    2017-12-01

    Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. Particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing a repetitive stimulation in three different modalities (visual, tactile and auditory) with a precise control of stimuli parameters. Results from a pilot study with a stimulation in a particular modality and in two modalities simultaneously prove the feasibility of the device for studying the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for a precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. Stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. On the

  5. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  7. Temporal integration of loudness measured using categorical loudness scaling and matching procedures

    DEFF Research Database (Denmark)

    Valente, Daniel L.; Joshi, Suyash Narendra; Jesteadt, Walt

    2011-01-01

    integration of loudness and previously reported nonmonotonic behavior observed at mid sound pressure levels is replicated with this procedure. Stimuli that are assigned to the same category are effectively matched in loudness, allowing the measurement of temporal integration with CLS without curve

  8. A Temporal-Causal Modelling Approach to Integrated Contagion and Network Change in Social Networks

    NARCIS (Netherlands)

    Blankendaal, Romy; Parinussa, Sarah; Treur, Jan

    2016-01-01

    This paper introduces an integrated temporal-causal model for dynamics in social networks addressing the contagion principle by which states are affected mutually, and both the homophily principle and the more-becomes-more principle by which connections are adapted over time. The integrated model

  9. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
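
    Perception sensitivity d' in the abstract above follows the standard signal-detection definition, d' = z(hit rate) - z(false-alarm rate). The sketch below computes it from response counts; the counts are hypothetical, not data from the study, and the log-linear correction is just one common way to avoid infinite z-scores.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
          (add 0.5 to each cell) to keep the rates away from 0 and 1."""
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Hypothetical counts: visual discrimination with a spatiotemporally
      # reliable auditory cue vs. a baseline block without it.
      print(round(d_prime(78, 22, 14, 86), 2))   # cued block
      print(round(d_prime(65, 35, 20, 80), 2))   # baseline block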

  10. Deficit in visual temporal integration in autism spectrum disorders.

    Science.gov (United States)

    Nakano, Tamami; Ota, Haruhisa; Kato, Nobumasa; Kitazawa, Shigeru

    2010-04-07

    Individuals with autism spectrum disorders (ASD) are superior in processing local features. Frith and Happe conceptualize this cognitive bias as 'weak central coherence', implying that a local enhancement derives from a weakness in integrating local elements into a coherent whole. The suggested deficit has been challenged, however, because individuals with ASD were not found to be inferior to normal controls in holistic perception. In these opposing studies, however, subjects were encouraged to ignore local features and attend to the whole. Therefore, no one has directly tested whether individuals with ASD are able to integrate local elements over time into a whole image. Here, we report a weakness of individuals with ASD in naming familiar objects moved behind a narrow slit, which was worsened by the absence of local salient features. The results indicate that individuals with ASD have a clear deficit in integrating local visual information over time into a global whole, providing direct evidence for the weak central coherence hypothesis.

  11. The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke

    OpenAIRE

    Leff, Alexander P.; Schofield, Thomas M.; Crinion, Jennifer T.; Seghier, Mohamed L.; Grogan, Alice; Green, David W.; Price, Cathy J.

    2009-01-01

    Competing theories of short-term memory function make specific predictions about the functional anatomy of auditory short-term memory and its role in language comprehension. We analysed high-resolution structural magnetic resonance images from 210 stroke patients and employed a novel voxel based analysis to test the relationship between auditory short-term memory and speech comprehension. Using digit span as an index of auditory short-term memory capacity we found that the structural integrit...

  12. Comparison for younger and older adults: Stimulus temporal asynchrony modulates audiovisual integration.

    Science.gov (United States)

    Ren, Yanna; Ren, Yanling; Yang, Weiping; Tang, Xiaoyu; Wu, Fengxia; Wu, Qiong; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong

    2018-02-01

    Recent research has shown that the magnitudes of responses to multisensory information are highly dependent on the stimulus structure. The temporal proximity of multiple signal inputs is a critical determinant for cross-modal integration. Here, we investigated the influence that temporal asynchrony has on audiovisual integration in both younger and older adults using event-related potentials (ERP). Our results showed that in the simultaneous audiovisual condition, early integration was similar for the younger and older groups, except for the earliest integration (80-110 ms), which occurred in the occipital region for older adults but was absent for younger adults. Additionally, late integration was delayed in older adults (280-300 ms) compared to younger adults (210-240 ms). In audition-leading vision conditions, the earliest integration (80-110 ms) was absent in younger adults but did occur in older adults. Additionally, after increasing the temporal disparity from 50 ms to 100 ms, late integration was delayed in both younger (from 230-290 ms to 280-300 ms) and older (from 210-240 ms to 280-300 ms) adults. In the audition-lagging vision conditions, integration only occurred in the A100V condition for younger adults and in the A50V condition for older adults. The current results suggested that the audiovisual temporal integration pattern differed between the audition-leading and audition-lagging vision conditions and further revealed the varying effect of temporal asynchrony on audiovisual integration in younger and older adults. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    Science.gov (United States)

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low vision individuals when the audiovisual pairs of stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind the visual stimulus (i.e., SOAs 0, ± 250, ± 400 ms). The visual detection enhancement was reduced in magnitude and limited only to the synchronous condition and to the condition in which the sound stimulus was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study seems to suggest that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.

  14. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V) were found for the auditory evoked components. An audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  15. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    Directory of Open Access Journals (Sweden)

    Jordi Navarra

    Full Text Available The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  16. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi Tateno

    2013-11-01

    Full Text Available To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell–auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone burst and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.

  17. White Matter Integrity Dissociates Verbal Memory and Auditory Attention Span in Emerging Adults with Congenital Heart Disease.

    Science.gov (United States)

    Brewster, Ryan C; King, Tricia Z; Burns, Thomas G; Drossner, David M; Mahle, William T

    2015-01-01

    White matter disruptions have been identified in individuals with congenital heart disease (CHD). However, no specific theory-driven relationships between microstructural white matter disruptions and cognition have been established in CHD. We conducted a two-part study. First, we identified significant differences in fractional anisotropy (FA) of emerging adults with CHD using Tract-Based Spatial Statistics (TBSS). TBSS analyses between 22 participants with CHD and 18 demographically similar controls identified five regions of normal-appearing white matter with significantly lower FA in CHD, and two with higher FA. Next, two regions of lower FA in CHD were selected to examine theory-driven differential relationships with cognition: voxels along the left uncinate fasciculus (UF; a tract theorized to contribute to verbal memory) and voxels along the right middle cerebellar peduncle (MCP; a tract previously linked to attention). In CHD, a significant positive correlation between UF FA and memory was found, r(20)=.42, p=.049 (uncorrected). There was no correlation between UF and auditory attention span. A positive correlation between MCP FA and auditory attention span was found, r(20)=.47, p=.027 (uncorrected). There was no correlation between MCP and memory. In controls, no significant relationships were identified. These results are consistent with previous literature demonstrating lower FA in younger CHD samples, and provide novel evidence for disrupted white matter integrity in emerging adults with CHD. Furthermore, a correlational double dissociation established distinct white matter circuitry (UF and MCP) and differential cognitive correlates (memory and attention span, respectively) in young adults with CHD.

  18. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, whether and how the network across the whole brain participates in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our
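
    The time-frequency global coherence is only described briefly above; as a simplified, stationary stand-in, the sketch below averages magnitude-squared coherence over all sensor pairs within a frequency band. The surrogate data, sampling rate, and band edges are assumptions for illustration only.

      import numpy as np
      from itertools import combinations
      from scipy.signal import coherence

      def band_global_coherence(eeg, fs=256.0, band=(30.0, 45.0), nperseg=256):
          """Average magnitude-squared coherence over all sensor pairs within a
          frequency band (rows of eeg are sensors, columns are samples)."""
          pair_values = []
          for i, j in combinations(range(eeg.shape[0]), 2):
              freqs, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=nperseg)
              in_band = (freqs >= band[0]) & (freqs <= band[1])
              pair_values.append(cxy[in_band].mean())
          return float(np.mean(pair_values))

      # Toy data: 8 sensors x 4 s of surrogate EEG; in practice one would pass
      # epochs from the synchronous and asynchronous AV-speech conditions separately.
      rng = np.random.default_rng(0)
      eeg = rng.standard_normal((8, int(4 * 256)))
      print(band_global_coherence(eeg, band=(30.0, 45.0)))  # gamma band
      print(band_global_coherence(eeg, band=(8.0, 12.0)))   # alpha band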

  19. Real-time fMRI neurofeedback to down-regulate superior temporal gyrus activity in patients with schizophrenia and auditory hallucinations: a proof-of-concept study.

    Science.gov (United States)

    Orlov, Natasza D; Giampietro, Vincent; O'Daly, Owen; Lam, Sheut-Ling; Barker, Gareth J; Rubia, Katya; McGuire, Philip; Shergill, Sukhwinder S; Allen, Paul

    2018-02-12

    Neurocognitive models and previous neuroimaging work posit that auditory verbal hallucinations (AVH) arise due to increased activity in speech-sensitive regions of the left posterior superior temporal gyrus (STG). Here, we examined if patients with schizophrenia (SCZ) and AVH could be trained to down-regulate STG activity using real-time functional magnetic resonance imaging neurofeedback (rtfMRI-NF). We also examined the effects of rtfMRI-NF training on functional connectivity between the STG and other speech and language regions. Twelve patients with SCZ and treatment-refractory AVH were recruited to participate in the study and were trained to down-regulate STG activity using rtfMRI-NF, over four MRI scanner visits during a 2-week training period. STG activity and functional connectivity were compared pre- and post-training. Patients successfully learnt to down-regulate activity in their left STG over the rtfMRI-NF training. Post-training, patients showed increased functional connectivity between the left STG, the left inferior prefrontal gyrus (IFG) and the inferior parietal gyrus. The post-training increase in functional connectivity between the left STG and IFG was associated with a reduction in AVH symptoms over the training period. The speech-sensitive region of the left STG is a suitable target region for rtfMRI-NF in patients with SCZ and treatment-refractory AVH. Successful down-regulation of left STG activity can increase functional connectivity between speech motor and perception regions. These findings suggest that patients with AVH have the ability to alter activity and connectivity in speech and language regions, and raise the possibility that rtfMRI-NF training could present a novel therapeutic intervention in SCZ.

  20. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  1. What you see is what you remember : Visual chunking by temporal integration enhances working memory

    NARCIS (Netherlands)

    Akyürek, Elkan G.; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-01-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the

  2. Deficits in Visuo-Motor Temporal Integration Impacts Manual Dexterity in Probable Developmental Coordination Disorder.

    Science.gov (United States)

    Nobusako, Satoshi; Sakai, Ayami; Tsujimoto, Taeko; Shuto, Takashi; Nishi, Yuki; Asano, Daiki; Furukawa, Emi; Zama, Takuro; Osumi, Michihiro; Shimada, Sotaro; Morioka, Shu; Nakai, Akio

    2018-01-01

    The neurological basis of developmental coordination disorder (DCD) is thought to be deficits in the internal model and mirror-neuron system (MNS) in the parietal lobe and cerebellum. However, it is not clear if the visuo-motor temporal integration in the internal model and automatic-imitation function in the MNS differs between children with DCD and those with typical development (TD). The current study aimed to investigate these differences. Using the manual dexterity test of the Movement Assessment Battery for Children (second edition), the participants were assigned to either the probable DCD (pDCD) group or the TD group. The former comprised 29 children with clumsy manual dexterity, while the latter consisted of 42 children with normal manual dexterity. Visuo-motor temporal integration ability and automatic-imitation function were measured using the delayed visual feedback detection task and motor interference task, respectively. Further, the current study investigated whether autism-spectrum disorder (ASD) traits, attention-deficit hyperactivity disorder (ADHD) traits, and depressive symptoms differed between the two groups, since these symptoms are frequent comorbidities of DCD. In addition, correlation and multiple regression analyses were performed to extract factors affecting clumsy manual dexterity. In the results, the delay-detection threshold (DDT) and steepness of the delay-detection probability curve, which indicated visuo-motor temporal integration ability, were significantly prolonged and decreased, respectively, in children with pDCD. The interference effect, which indicated automatic-imitation function, was also significantly reduced in this group. These results highlighted that children with clumsy manual dexterity have deficits in visuo-motor temporal integration and automatic-imitation function. There was a significant correlation between manual dexterity, and measures of visuo-motor temporal integration, and ASD traits and ADHD traits and

  3. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.
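
    The record does not spell out the model, but the idea of weighting dissonant events by an auditory short-term memory trace can be sketched directly. The snippet below is a minimal illustration only, assuming a Plomp-Levelt-style pairwise roughness curve (Sethares parameterisation) and an exponential decay of remembered partials; the function names, the half-life value, and the example chord are placeholders, not the authors' implementation.

```python
import numpy as np

def pl_roughness(f1, f2, a1, a2):
    """Plomp-Levelt-style roughness of one pair of partials (Sethares parameterisation)."""
    s = 0.24 / (0.021 * min(f1, f2) + 19.0)      # critical-bandwidth scaling
    d = abs(f2 - f1)
    return a1 * a2 * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def memory_dissonance(events, half_life=1.5):
    """Dissonance of a sequence of events (each a list of (freq, amp) partials),
    integrating each new event against an exponentially decaying memory trace."""
    decay = 0.5 ** (1.0 / half_life)             # decay factor applied per event here
    memory = []                                   # remembered partials with decayed amplitudes
    scores = []
    for partials in events:
        # dissonance of the new event against what is still held in memory
        d = sum(pl_roughness(f, fm, a, am) for f, a in partials for fm, am in memory)
        # plus the ordinary within-event (simultaneous) dissonance
        d += sum(pl_roughness(f1, f2, a1, a2)
                 for i, (f1, a1) in enumerate(partials)
                 for f2, a2 in partials[i + 1:])
        scores.append(d)
        # update the memory trace: decay old partials, then add the new ones
        memory = [(f, a * decay) for f, a in memory] + list(partials)
    return scores

# e.g. a C major triad followed by a semitone clash against the remembered tones
print(memory_dissonance([[(262, 1), (330, 1), (392, 1)], [(277, 1)]]))
```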

  4. MR and genetics in schizophrenia: Focus on auditory hallucinations

    International Nuclear Information System (INIS)

    Aguilar, Eduardo Jesus; Sanjuan, Julio; Garcia-Marti, Gracian; Lull, Juan Jose; Robles, Montserrat

    2008-01-01

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  5. MR and genetics in schizophrenia: Focus on auditory hallucinations

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Eduardo Jesus [Psychiatric Service, Clinic University Hospital, Avda. Blasco Ibanez 17, 46010 Valencia (Spain)], E-mail: eduardoj.aguilar@gmail.com; Sanjuan, Julio [Psychiatric Unit, Faculty of Medicine, Valencia University, Avda. Blasco Ibanez 17, 46010 Valencia (Spain); Garcia-Marti, Gracian [Department of Radiology, Hospital Quiron, Avda. Blasco Ibanez 14, 46010 Valencia (Spain); Lull, Juan Jose; Robles, Montserrat [ITACA Institute, Polytechnic University of Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

    2008-09-15

    Although many structural and functional abnormalities have been related to schizophrenia, until now, no single biological marker has been of diagnostic clinical utility. One way to obtain more valid findings is to focus on the symptoms instead of the syndrome. Auditory hallucinations (AHs) are one of the most frequent and reliable symptoms of psychosis. We present a review of our main findings, using a multidisciplinary approach, on auditory hallucinations. Firstly, by applying a new auditory emotional paradigm specific for psychosis, we found an enhanced activation of limbic and frontal brain areas in response to emotional words in these patients. Secondly, in a voxel-based morphometric study, we obtained a significant decreased gray matter concentration in the insula (bilateral), superior temporal gyrus (bilateral), and amygdala (left) in patients compared to healthy subjects. This gray matter loss was directly related to the intensity of AH. Thirdly, using a new method for looking at areas of coincidence between gray matter loss and functional activation, large coinciding brain clusters were found in the left and right middle temporal and superior temporal gyri. Finally, we summarized our main findings from our studies of the molecular genetics of auditory hallucinations. Taking these data together, an integrative model to explain the neurobiological basis of this psychotic symptom is presented.

  6. FACILITATING INTEGRATED SPATIO-TEMPORAL VISUALIZATION AND ANALYSIS OF HETEROGENEOUS ARCHAEOLOGICAL AND PALAEOENVIRONMENTAL RESEARCH DATA

    Directory of Open Access Journals (Sweden)

    C. Willmes

    2012-07-01

    In the context of the Collaborative Research Centre 806 "Our way to Europe" (CRC806), a research database is developed for integrating data from the disciplines of archaeology, the geosciences and the cultural sciences to facilitate integrated access to heterogeneous data sources. A practice-oriented data integration concept and its implementation are presented in this contribution. The data integration approach is based on the application of Semantic Web Technology and is applied to the domains of archaeological and palaeoenvironmental data. The aim is to provide integrated spatio-temporal access to an existing wealth of data to facilitate research on the integrated data basis. For the web portal of the CRC806 research database (CRC806-Database), a number of interfaces and applications have been evaluated, developed and implemented for exposing the data to interactive analysis and visualizations.

  7. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
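
    Ando's two internal representations can be approximated directly from a binaural recording. The sketch below is a minimal illustration, not the book's model: it computes a normalised autocorrelation function for one frame (from which temporal factors such as the delay of the first peak are read) and the interaural cross-correlation coefficient (IACC) as the maximum of the normalised cross-correlation within +/-1 ms. Frame length, sampling rate and the test signals are placeholder choices.

```python
import numpy as np

def running_acf(x, fs, frame=0.1):
    """Normalised autocorrelation of one signal frame; temporal-sensation factors
    (e.g. the pitch-related first peak delay tau_1) are read from this function."""
    n = int(frame * fs)
    seg = x[:n] - np.mean(x[:n])
    acf = np.correlate(seg, seg, mode="full")[n - 1:]
    return acf / acf[0]

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: maximum of the normalised
    cross-correlation between the two ear signals within +/- max_lag_ms."""
    lags = int(max_lag_ms * 1e-3 * fs)
    l = left - np.mean(left)
    r = right - np.mean(right)
    denom = np.sqrt(np.sum(l**2) * np.sum(r**2))
    xcorr = [np.sum(l[max(0, -k):len(l) - max(0, k)] * r[max(0, k):len(r) - max(0, -k)]) / denom
             for k in range(-lags, lags + 1)]
    return max(xcorr)

fs = 16000
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 200 * t)
right = np.roll(left, int(0.0005 * fs))          # 0.5 ms interaural delay
print(running_acf(left, fs)[:5], iacc(left, right, fs))
```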

  8. What You See Is What You Remember: Visual Chunking by Temporal Integration Enhances Working Memory.

    Science.gov (United States)

    Akyürek, Elkan G; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-12-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy, and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the attentional and working memory costs of temporal integration of successive target stimulus pairs embedded in rapid serial visual presentation. ERPs were measured as a function of behavioral reports: One target, two separate targets, or two targets reported as a single integrated target. N2pc amplitude, reflecting attentional processing, depended on the actual number of successive targets. The memory-related CDA and P3 components instead depended on the perceived number of targets irrespective of their actual succession. The report of two separate targets was associated with elevated amplitude, whereas integrated as well as actual single targets exhibited lower amplitude. Temporal integration thus provided an efficient means of processing sensory input, offloading working memory so that the features of two targets were consolidated and maintained at a cost similar to that of a single target.

  9. Temporal integration of loudness, loudness discrimination, and the form of the loudness function

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1997-01-01

    Temporal integration for loudness of 5-kHz tones was measured as a function of level between 2 and 60 dB SL. Absolute thresholds and levels required to produce equal loudness were measured for 2-, 10-, 50- and 250-ms tones using adaptive, two-interval, two-alternative forced-choice procedures. The procedure for loudness balances is new and obtained concurrent measurements for ten tone pairs in ten interleaved tracks. Each track converged at the level required to make the variable stimulus just louder than the fixed stimulus. Thus, the data yield estimates of the just noticeable difference for loudness level and temporal integration for loudness. Results for four listeners show that the amount of temporal integration, defined as the level difference between equally loud short and long tones, varies markedly with level and is largest at moderate levels. The effect of level increases…
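
    The quantity called the amount of temporal integration above is simply the level difference between a short and a long tone at the point where they are judged equally loud. A tiny worked example of that computation follows; the balance levels below are invented for illustration and are not data from the study.

```python
# (long-tone level, equally loud short-tone level), both in dB SPL -- illustrative numbers only
balance_points = [(20, 32), (40, 58), (60, 78), (80, 92)]

for long_level, short_level in balance_points:
    amount = short_level - long_level            # amount of temporal integration in dB
    print(f"at {long_level} dB SPL: temporal integration = {amount} dB")
```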

  10. Parameters affecting temporal resolution of Time Resolved Integrative Optical Neutron Detector (TRION)

    International Nuclear Information System (INIS)

    Mor, I; Vartsky, D; Bar, D; Feldman, G; Goldberg, M B; Brandis, M; Dangendorf, V; Tittelmeier, K; Bromberger, B; Weierganz, M

    2013-01-01

    The Time-Resolved Integrative Optical Neutron (TRION) detector was developed for Fast Neutron Resonance Radiography (FNRR), a fast-neutron transmission imaging method that exploits characteristic energy variations of the total scattering cross-section in the En = 1–10 MeV range to detect specific elements within a radiographed object. As opposed to classical event-counting time of flight (ECTOF), it integrates the detector signal during a well-defined neutron time-of-flight window corresponding to a pre-selected energy bin, e.g., the energy interval spanning a cross-section resonance of an element such as C, O and N. The integrative characteristic of the detector permits loss-free operation at very intense, pulsed neutron fluxes, at the cost, however, of degraded recorded temporal resolution. This work presents a theoretical and experimental evaluation of detector-related parameters which affect the temporal resolution of the TRION system.

  11. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD ± 7.0%); auditory cues yielded a mean percentage error of 12.9% (SD ± 11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.

  12. Sensory augmentation: integration of an auditory compass signal into human perception of space

    Science.gov (United States)

    Schumann, Frank; O’Regan, J. Kevin

    2017-01-01

    Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences. PMID:28195187

  13. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  14. Patterns of morphological integration between parietal and temporal areas in the human skull.

    Science.gov (United States)

    Bruner, Emiliano; Pereira-Pedro, Ana Sofia; Bastir, Markus

    2017-10-01

    Modern humans have evolved bulging parietal areas and large, projecting temporal lobes. Both changes, largely due to a longitudinal expansion of these cranial and cerebral elements, were hypothesized to be the result of brain evolution and cognitive variations. Nonetheless, the independence of these two morphological characters has not been evaluated. Because of structural and functional integration among cranial elements, changes in the position of the temporal poles can be a secondary consequence of parietal bulging and reorientation of the head axis. In this study, we use geometric morphometrics to test the correlation between parietal shape and the morphology of the endocranial base in a sample of adult modern humans. Our results suggest that parietal proportions show no correlation with the relative position of the temporal poles within the spatial organization of the endocranial base. The vault and endocranial base are likely to be involved in distinct morphogenetic processes, with scarce or no integration between these two districts. Therefore, the current evidence rejects the hypothesis of reciprocal morphological influences between parietal and temporal morphology, suggesting that evolutionary spatial changes in these two areas may have been independent. However, parietal bulging exerts a visible effect on the rotation of the cranial base, influencing head position and orientation. This change can have had a major relevance in the reorganization of the head functional axis. © 2017 Wiley Periodicals, Inc.

  15. Thalamotemporal impairment in temporal lobe epilepsy: a combined MRI analysis of structure, integrity, and connectivity.

    Science.gov (United States)

    Keller, Simon S; O'Muircheartaigh, Jonathan; Traynor, Catherine; Towgood, Karren; Barker, Gareth J; Richardson, Mark P

    2014-02-01

    Thalamic abnormality in temporal lobe epilepsy (TLE) is well known from imaging studies, but evidence is lacking regarding connectivity profiles of the thalamus and their involvement in the disease process. We used a novel multisequence magnetic resonance imaging (MRI) protocol to elucidate the relationship between mesial temporal and thalamic pathology in TLE. For 23 patients with TLE and 23 healthy controls, we performed T1-weighted (for analysis of tissue structure), diffusion tensor imaging (tissue connectivity), and T1 and T2 relaxation (tissue integrity) MRI across the whole brain. We used connectivity-based segmentation to determine connectivity patterns of thalamus to ipsilateral cortical regions (occipital, parietal, prefrontal, postcentral, precentral, and temporal). We subsequently determined volumes, mean tractography streamlines, and mean T1 and T2 relaxometry values for each thalamic segment preferentially connecting to a given cortical region, and of the hippocampus and entorhinal cortex. As expected, patients had significant volume reduction and increased T2 relaxation time in ipsilateral hippocampus and entorhinal cortex. There was bilateral volume loss, mean streamline reduction, and T2 increase of the thalamic segment preferentially connected to temporal lobe, corresponding to anterior, dorsomedial, and pulvinar thalamic regions, with no evidence of significant change in any other thalamic segments. Left and right thalamotemporal segment volume and T2 were significantly correlated with volume and T2 of ipsilateral (epileptogenic), but not contralateral (nonepileptogenic), mesial temporal structures. These convergent and robust data indicate that thalamic abnormality in TLE is restricted to the area of the thalamus that is preferentially connected to the epileptogenic temporal lobe. The degree of thalamic pathology is related to the extent of mesial temporal lobe damage in TLE. © 2014 The Authors. Epilepsia published by Wiley Periodicals, Inc.

  16. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    Science.gov (United States)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to
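
    The dissertation abstract above gives no implementation details, but two of the cues such room-simulation systems manipulate—source gain falling with distance and the direct-to-reverberant ratio—are easy to illustrate. The sketch below is a generic illustration under those assumptions, not the system described in the thesis; the function names and the simple pair-wise panning law are placeholders.

```python
import numpy as np

def distance_cues(direct, reverb, distance_m, ref_dist=1.0):
    """Very simplified distance rendering: scale the direct sound by 1/r while the
    (pre-computed) reverberant return stays roughly constant, so the
    direct-to-reverberant ratio falls with distance -- a key perceptual distance cue."""
    g_direct = ref_dist / max(distance_m, ref_dist)
    return g_direct * direct + reverb

def pan_pair(mono, angle, span=30.0):
    """Constant-power pair-wise panning of a mono signal between two loudspeakers
    'span' degrees apart (e.g. centre and right of a 3/2 layout)."""
    theta = np.clip(angle / span, 0.0, 1.0) * np.pi / 2
    return np.cos(theta) * mono, np.sin(theta) * mono   # gains for (near, far) speaker
```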

  17. Comparing the influence of spectro-temporal integration in computational speech segregation

    DEFF Research Database (Denmark)

    Bentsen, Thomas; May, Tobias; Kressner, Abigail Anne

    2016-01-01

    The goal of computational speech segregation systems is to automatically segregate a target speaker from interfering maskers. Typically, these systems include a feature extraction stage in the front-end and a classification stage in the back-end. A spectro-temporal integration strategy can be applied in either the front-end, using the so-called delta features, or in the back-end, using a second classifier that exploits the posterior probability of speech from the first classifier across a spectro-temporal window. This study systematically analyzes the influence of such stages on segregation … metric that comprehensively predicts computational segregation performance and correlates well with intelligibility. The outcome of this study could help to identify the most effective spectro-temporal integration strategy for computational segregation systems.
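
    The front-end option mentioned above—delta features—is a standard spectro-temporal integration step: a regression-based temporal difference computed over a small window of feature frames. A minimal sketch follows; the window size and the feature-matrix layout are assumptions, not the study's configuration.

```python
import numpy as np

def delta_features(feats, context=2):
    """Regression-based delta (first-order difference) features over a +/- 'context'
    frame window; 'feats' has shape (n_frames, n_channels)."""
    n = feats.shape[0]
    padded = np.pad(feats, ((context, context), (0, 0)), mode="edge")
    num = sum(k * (padded[context + k:context + k + n] - padded[context - k:context - k + n])
              for k in range(1, context + 1))
    den = 2 * sum(k * k for k in range(1, context + 1))
    return num / den
```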

  18. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation

  19. Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration.

    Science.gov (United States)

    Zhao, Wanying; Riggs, Kevin; Schindler, Igor; Holle, Henning

    2018-02-21

    Language and action naturally occur together in the form of cospeech gestures, and there is now convincing evidence that listeners display a strong tendency to integrate semantic information from both domains during comprehension. A contentious question, however, has been which brain areas are causally involved in this integration process. In previous neuroimaging studies, left inferior frontal gyrus (IFG) and posterior middle temporal gyrus (pMTG) have emerged as candidate areas; however, it is currently not clear whether these areas are causally or merely epiphenomenally involved in gesture-speech integration. In the present series of experiments, we directly tested for a potential critical role of IFG and pMTG by observing the effect of disrupting activity in these areas using transcranial magnetic stimulation in a mixed gender sample of healthy human volunteers. The outcome measure was performance on a Stroop-like gesture task (Kelly et al., 2010a), which provides a behavioral index of gesture-speech integration. Our results provide clear evidence that disrupting activity in IFG and pMTG selectively impairs gesture-speech integration, suggesting that both areas are causally involved in the process. These findings are consistent with the idea that these areas play a joint role in gesture-speech integration, with IFG regulating strategic semantic access via top-down signals acting upon temporal storage areas. SIGNIFICANCE STATEMENT Previous neuroimaging studies suggest an involvement of inferior frontal gyrus and posterior middle temporal gyrus in gesture-speech integration, but findings have been mixed and due to methodological constraints did not allow inferences of causality. By adopting a virtual lesion approach involving transcranial magnetic stimulation, the present study provides clear evidence that both areas are causally involved in combining semantic information arising from gesture and speech. These findings support the view that, rather than being

  20. Integration of temporal and spatial properties of dynamic connectivity networks for automatic diagnosis of brain disease.

    Science.gov (United States)

    Jie, Biao; Liu, Mingxia; Shen, Dinggang

    2018-07-01

    Functional connectivity networks (FCNs) using resting-state functional magnetic resonance imaging (rs-fMRI) have been applied to the analysis and diagnosis of brain disease, such as Alzheimer's disease (AD) and its prodrome, i.e., mild cognitive impairment (MCI). Different from conventional studies focusing on static descriptions on functional connectivity (FC) between brain regions in rs-fMRI, recent studies have resorted to dynamic connectivity networks (DCNs) to characterize the dynamic changes of FC, since dynamic changes of FC may indicate changes in macroscopic neural activity patterns in cognitive and behavioral aspects. However, most of the existing studies only investigate the temporal properties of DCNs (e.g., temporal variability of FC between specific brain regions), ignoring the important spatial properties of the network (e.g., spatial variability of FC associated with a specific brain region). Also, emerging evidence on FCNs has suggested that, besides temporal variability, there is significant spatial variability of activity foci over time. Hence, integrating both temporal and spatial properties of DCNs can intuitively promote the performance of connectivity-network-based learning methods. In this paper, we first define a new measure to characterize the spatial variability of DCNs, and then propose a novel learning framework to integrate both temporal and spatial variabilities of DCNs for automatic brain disease diagnosis. Specifically, we first construct DCNs from the rs-fMRI time series at successive non-overlapping time windows. Then, we characterize the spatial variability of a specific brain region by computing the correlation of functional sequences (i.e., the changing profile of FC between a pair of brain regions within all time windows) associated with this region. Furthermore, we extract both temporal variabilities and spatial variabilities from DCNs as features, and integrate them for classification by using manifold regularized multi
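
    The spatial-variability measure is described above only in words (correlating a region's "functional sequences" across windows). One plausible reading of it, together with the sliding-window construction of the DCNs and a simple temporal-variability measure, is sketched below; the exact definitions in the paper may differ, and the window length and data shapes are placeholders.

```python
import numpy as np

def dynamic_connectivity(ts, win_len):
    """Non-overlapping sliding-window FC networks from an rs-fMRI matrix
    'ts' of shape (n_timepoints, n_regions)."""
    wins = [np.corrcoef(ts[i:i + win_len].T)
            for i in range(0, len(ts) - win_len + 1, win_len)]
    return np.stack(wins)                        # (n_windows, n_regions, n_regions)

def temporal_variability(dcn):
    """Per connection: variability of FC across windows (std over time)."""
    return dcn.std(axis=0)

def spatial_variability(dcn):
    """Per region: 1 minus the mean correlation between that region's functional
    sequences (its FC profiles over windows to all other regions)."""
    n_win, n_reg, _ = dcn.shape
    var = np.zeros(n_reg)
    for r in range(n_reg):
        seqs = dcn[:, r, :]                      # (n_windows, n_regions) FC time courses of region r
        seqs = np.delete(seqs, r, axis=1)        # drop the self-connection
        corr = np.corrcoef(seqs.T)               # correlations between functional sequences
        off = corr[~np.eye(len(corr), dtype=bool)]
        var[r] = 1.0 - off.mean()
    return var
```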

  1. Seizure Control and Memory Impairment Are Related to Disrupted Brain Functional Integration in Temporal Lobe Epilepsy.

    Science.gov (United States)

    Park, Chang-Hyun; Choi, Yun Seo; Jung, A-Reum; Chung, Hwa-Kyoung; Kim, Hyeon Jin; Yoo, Jeong Hyun; Lee, Hyang Woon

    2017-01-01

    Brain functional integration can be disrupted in patients with temporal lobe epilepsy (TLE), but the clinical relevance of this disruption is not completely understood. The authors hypothesized that disrupted functional integration over brain regions remote from, as well as adjacent to, the seizure focus could be related to clinical severity in terms of seizure control and memory impairment. Using resting-state functional MRI data acquired from 48 TLE patients and 45 healthy controls, the authors mapped functional brain networks and assessed changes in a network parameter of brain functional integration, efficiency, to examine the distribution of disrupted functional integration within and between brain regions. The authors assessed whether the extent of altered efficiency was influenced by seizure control status and whether the degree of altered efficiency was associated with the severity of memory impairment. Alterations in the efficiency were observed primarily near the subcortical region ipsilateral to the seizure focus in TLE patients. The extent of regional involvement was greater in patients with poor seizure control: it reached the frontal, temporal, occipital, and insular cortices in TLE patients with poor seizure control, whereas it was limited to the limbic and parietal cortices in TLE patients with good seizure control. Furthermore, TLE patients with poor seizure control experienced more severe memory impairment, and this was associated with lower efficiency in the brain regions with altered efficiency. These findings indicate that the distribution of disrupted brain functional integration is clinically relevant, as it is associated with seizure control status and comorbid memory impairment.

  2. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    Science.gov (United States)

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  3. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls, whereas audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
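
    The abstract states that cumulative distribution functions of response times were used to quantify audiovisual integration. A common way to do this is to test Miller's race-model inequality, comparing the redundant-target (audiovisual) RT CDF against the sum of the two unimodal CDFs; the sketch below assumes that approach and uses invented response times purely for illustration.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution function of reaction times evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Miller's race-model inequality: the redundant-target CDF exceeding the sum of
    the unimodal CDFs (capped at 1) indicates integration beyond statistical facilitation."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound           # positive values = violation

t = np.arange(150, 601, 10)                      # ms
viol = race_model_violation(rt_av=[230, 250, 270, 300], rt_a=[280, 310, 330, 360],
                            rt_v=[290, 320, 350, 380], t_grid=t)
print(viol.max())
```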

  4. Auditory Neuropathy

    Science.gov (United States)


  5. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt.

    Science.gov (United States)

    Hickok, Gregory; Buchsbaum, Bradley; Humphries, Colin; Muftuler, Tugan

    2003-07-01

    The concept of auditory-motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory-motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the "motor" phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal-temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory-motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic-phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.

  6. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  7. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  8. Bilateral duplication of the internal auditory canal

    International Nuclear Information System (INIS)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu; Koo, Ja-Won

    2007-01-01

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  9. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
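
    The record does not name the classifier beyond "regularized multivariate"; one widely used choice for such high-dimensional evoked-potential data is L2-regularised multinomial logistic regression evaluated with cross-validation. The sketch below illustrates that generic approach with random placeholder data and dimensions; it is not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical shapes: 200 trials, 64 electrodes x 150 time samples of evoked potential
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64 * 150))         # flattened evoked potentials, one row per trial
y = rng.integers(0, 8, size=200)                 # which of 8 vocalization stimuli was played

# L2-regularised multinomial logistic regression as the "regularised multivariate classifier"
clf = make_pipeline(StandardScaler(), LogisticRegression(C=0.01, max_iter=2000))
print(cross_val_score(clf, X, y, cv=5).mean())   # chance level would be ~1/8
```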

  10. Temporal integration of loudness in listeners with hearing losses of primarily cochlear origin

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1999-01-01

    To investigate how hearing loss of primarily cochlear origin affects the loudness of brief tones, loudness matches between 5- and 200-ms tones were obtained as a function of level for 15 listeners with cochlear impairments and for seven age-matched controls. Three frequencies, usually 0.5, 1, and 4 kHz, were tested … The amount of temporal integration—defined as the level difference between equally loud short and long tones—varied nonmonotonically with level and was largest at moderate levels. No consistent effect of frequency was apparent. The impaired listeners varied widely, but most showed a clear effect of level on the amount of temporal integration. Overall, their results appear consistent with expectations based on knowledge of the general properties of their loudness-growth functions and the equal-loudness-ratio hypothesis, which states that the loudness ratio between equal-SPL long and brief tones is the same at all SPLs…

  11. Auditory Pattern Memory and Group Signal Detection

    National Research Council Canada - National Science Library

    Sorkin, Robert

    1997-01-01

    … The experiments with temporally-coded auditory patterns showed how listeners' attention is influenced by the position and the amount of information carried by different segments of the pattern…

  12. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    Science.gov (United States)

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  13. Effects of temporal integration on the shape of visual backward masking functions.

    Science.gov (United States)

    Francis, Gregory; Cho, Yang Seok

    2008-10-01

    Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which can sometimes be monotonic increasing but other times is U-shaped. Theories of backward masking have long hypothesized that temporal integration of the target and mask influences properties of masking but have not connected the influence of integration with the shape of the masking function. With two experiments that vary the spatial properties of the target and mask, the authors provide evidence that temporal integration of the stimuli plays a critical role in determining the shape of the masking function. The resulting data both challenge current theories of backward masking and indicate what changes to the theories are needed to account for the new data. The authors further discuss the implication of the findings for uses of backward masking to explore other aspects of cognition.

  14. SPATIO-TEMPORAL ESTIMATION OF INTEGRATED WATER VAPOUR OVER THE MALAYSIAN PENINSULA DURING MONSOON SEASON

    Directory of Open Access Journals (Sweden)

    S. Salihin

    2017-10-01

    This paper provides precise information on the spatial-temporal distribution of water vapour retrieved from the Zenith Path Delay (ZPD) estimated by Global Positioning System (GPS) processing over the Malaysian Peninsula. A time series analysis of these ZPD and Integrated Water Vapour (IWV) values was done to capture the characteristics of their seasonal variation during monsoon seasons. This study found that the pattern and distribution of atmospheric water vapour over the Malaysian Peninsula during the whole four-year period were influenced by two inter-monsoon and two monsoon seasons, namely the First Inter-monsoon, the Second Inter-monsoon, the Southwest monsoon and the Northeast monsoon.
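
    The ZPD-to-IWV step summarised above follows a standard GPS-meteorology recipe: subtract a modelled Zenith Hydrostatic Delay (e.g. Saastamoinen) from the ZPD to obtain the Zenith Wet Delay, then scale by a conversion factor that depends on the weighted mean temperature Tm. A minimal sketch using commonly quoted Bevis-type constants is given below; the input values in the example are invented, and the exact constants used in the study may differ.

```python
import numpy as np

def zpd_to_iwv(zpd_m, pressure_hpa, lat_deg, height_km, tm_kelvin):
    """Convert GPS Zenith Path Delay to Integrated Water Vapour (kg/m^2).
    ZHD from the Saastamoinen model; conversion factor in the style of Bevis et al. (1992)."""
    zhd = 0.0022768 * pressure_hpa / (1 - 0.00266 * np.cos(2 * np.radians(lat_deg))
                                      - 0.00028 * height_km)      # hydrostatic delay [m]
    zwd = zpd_m - zhd                                             # wet delay [m]
    k2p, k3, rv = 16.52, 3.776e5, 461.5                           # K/hPa, K^2/hPa, J/(kg K)
    kappa = 1e8 / (rv * (k3 / tm_kelvin + k2p))                   # roughly 150-160 kg/m^3
    return kappa * zwd

# e.g. a typical tropical site: ZPD 2.60 m, surface pressure 1008 hPa, Tm ~ 285 K
print(zpd_to_iwv(2.60, 1008.0, lat_deg=3.1, height_km=0.05, tm_kelvin=285.0))
```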

  15. In search of an auditory engram

    OpenAIRE

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that mo...

  16. Central auditory processing outcome after stroke in children

    Directory of Open Access Journals (Sweden)

    Karla M. I. Freiria Elias

    2014-09-01

    Objective: To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method: 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and the staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to the performance in auditory analysis ability. Results: Similar performance between groups was verified in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed an impaired auditory ability to a moderate degree. Conclusion: Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.

  17. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  18. Exploring the role of the posterior middle temporal gyrus in semantic cognition: Integration of anterior temporal lobe with executive processes.

    Science.gov (United States)

    Davey, James; Thompson, Hannah E; Hallam, Glyn; Karapanagiotidis, Theodoros; Murphy, Charlotte; De Caso, Irene; Krieger-Redwood, Katya; Bernhardt, Boris C; Smallwood, Jonathan; Jefferies, Elizabeth

    2016-08-15

    Making sense of the world around us depends upon selectively retrieving information relevant to our current goal or context. However, it is unclear whether selective semantic retrieval relies exclusively on general control mechanisms recruited in demanding non-semantic tasks, or instead on systems specialised for the control of meaning. One hypothesis is that the left posterior middle temporal gyrus (pMTG) is important in the controlled retrieval of semantic (not non-semantic) information; however this view remains controversial since a parallel literature links this site to event and relational semantics. In a functional neuroimaging study, we demonstrated that an area of pMTG implicated in semantic control by a recent meta-analysis was activated in a conjunction of (i) semantic association over size judgements and (ii) action over colour feature matching. Under these circumstances the same region showed functional coupling with the inferior frontal gyrus - another crucial site for semantic control. Structural and functional connectivity analyses demonstrated that this site is at the nexus of networks recruited in automatic semantic processing (the default mode network) and executively demanding tasks (the multiple-demand network). Moreover, in both task and task-free contexts, pMTG exhibited functional properties that were more similar to ventral parts of inferior frontal cortex, implicated in controlled semantic retrieval, than more dorsal inferior frontal sulcus, implicated in domain-general control. Finally, the pMTG region was functionally correlated at rest with other regions implicated in control-demanding semantic tasks, including inferior frontal gyrus and intraparietal sulcus. We suggest that pMTG may play a crucial role within a large-scale network that allows the integration of automatic retrieval in the default mode network with executively-demanding goal-oriented cognition, and that this could support our ability to understand actions and non

  19. Integrating Future Land Use Scenarios to Evaluate the Spatio-Temporal Dynamics of Landscape Ecological Security

    Directory of Open Access Journals (Sweden)

    Yi Lu

    2016-11-01

    Urban ecological security is the basic principle of national ecological security. However, analyses of the spatial and temporal dynamics of ecological security remain limited, especially those that consider different scenarios of urban development. In this study, an integrated method is proposed that combines the Conversion of Land Use and its Effects (CLUE-S) model with the Pressure–State–Response (P-S-R) framework to assess landscape ecological security (LES) in Huangshan City, China under two scenarios. Our results suggest the following conclusions: (1) the spatial and temporal dynamics of ecological security are closely related to the urbanization process; (2) although the average values of landscape ecological security are similar under different scenarios, the areas of relatively high security levels vary considerably; and (3) spatial heterogeneity in ecological security exists between different districts and counties, and the city center and its vicinity may face relatively serious declines in ecological security in the future. Overall, the proposed method not only illustrates the spatio-temporal dynamics of landscape ecological security under different scenarios but also reveals the anthropogenic effects on ecosystems by differentiating between causes, effects, and human responses at the landscape scale. This information is of great significance to decision-makers for future urban planning and management.

  20. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise.

    Science.gov (United States)

    White-Schwoch, Travis; Davies, Evan C; Thompson, Elaine C; Woodruff Carr, Kali; Nicol, Trent; Bradlow, Ann R; Kraus, Nina

    2015-10-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But this auditory learning rarely occurs in ideal listening conditions-children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3-5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features-even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response

  1. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    Science.gov (United States)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a
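
    The Patlak slope and intercept referred to above come from graphical analysis of an irreversible-uptake tracer: after an equilibration time t*, plotting C_T(t)/C_p(t) against the normalised integral of the plasma input gives a straight line whose slope is the net influx rate Ki and whose intercept is an effective distribution volume. A minimal sketch of that fit is given below; it illustrates the Patlak estimation only, not the paper's 4D reconstruction algorithm, and t* and the array shapes are assumptions.

```python
import numpy as np

def patlak_fit(t, c_tissue, c_plasma, t_star=20.0):
    """Patlak graphical analysis: after t* the plot of C_T(t)/C_p(t) against
    (integral of C_p up to t)/C_p(t) is linear; slope = net influx rate Ki,
    intercept = effective distribution volume."""
    t, c_tissue, c_plasma = map(np.asarray, (t, c_tissue, c_plasma))
    # cumulative trapezoidal integral of the plasma input function
    cum_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))
    x = cum_cp / c_plasma                        # "stretched time"
    y = c_tissue / c_plasma
    late = t >= t_star                           # use only the linear, late part of the curve
    slope, intercept = np.polyfit(x[late], y[late], 1)
    return slope, intercept                      # (Ki, V)
```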

  2. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  4. Category-specific responses to faces and objects in primate auditory cortex

    Directory of Open Access Journals (Sweden)

    Kari L Hoffman

    2008-03-01

    Full Text Available Auditory and visual signals often occur together, and the two sensory channels are known to influence each other to facilitate perception. The neural basis of this integration is not well understood, although other forms of multisensory influences have been shown to occur at surprisingly early stages of processing in cortex. Primary visual cortex neurons can show frequency-tuning to auditory stimuli, and auditory cortex responds selectively to certain somatosensory stimuli, supporting the possibility that complex visual signals may modulate early stages of auditory processing. To elucidate which auditory regions, if any, are responsive to complex visual stimuli, we recorded from auditory cortex and the superior temporal sulcus while presenting visual stimuli consisting of various objects, neutral faces, and facial expressions generated during vocalization. Both objects and conspecific faces elicited robust field potential responses in auditory cortex sites, but the responses varied by category: both neutral and vocalizing faces had a highly consistent negative component (N100) followed by a broader positive component (P180), whereas object responses were more variable in time and shape, but could be discriminated consistently from the responses to faces. The face response did not vary within the face category, i.e., for expressive vs. neutral face stimuli. The presence of responses for both objects and neutral faces suggests that auditory cortex receives highly informative visual input that is not restricted to those stimuli associated with auditory components. These results reveal selectivity for complex visual stimuli in a brain region conventionally described as non-visual unisensory cortex.

  5. Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: top-down and bottom-up mismatch detection.

    Science.gov (United States)

    Uno, Takeshi; Kawai, Kensuke; Sakai, Katsuyuki; Wakebe, Toshihiro; Ibaraki, Takuya; Kunii, Naoto; Matsuo, Takeshi; Saito, Nobuhito

    2015-01-01

    Visual inputs can distort auditory perception, and accurate auditory processing requires the ability to detect and ignore visual input that is simultaneous and incongruent with auditory information. However, the neural basis of this auditory selection from audiovisual information is unknown, whereas integration process of audiovisual inputs is intensively researched. Here, we tested the hypothesis that the inferior frontal gyrus (IFG) and superior temporal sulcus (STS) are involved in top-down and bottom-up processing, respectively, of target auditory information from audiovisual inputs. We recorded high gamma activity (HGA), which is associated with neuronal firing in local brain regions, using electrocorticography while patients with epilepsy judged the syllable spoken by a voice while looking at a voice-congruent or -incongruent lip movement from the speaker. The STS exhibited stronger HGA if the patient was presented with information of large audiovisual incongruence than of small incongruence, especially if the auditory information was correctly identified. On the other hand, the IFG exhibited stronger HGA in trials with small audiovisual incongruence when patients correctly perceived the auditory information than when patients incorrectly perceived the auditory information due to the mismatched visual information. These results indicate that the IFG and STS have dissociated roles in selective auditory processing, and suggest that the neural basis of selective auditory processing changes dynamically in accordance with the degree of incongruity between auditory and visual information.

  6. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrilla, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single frame pose recovery, temporal integration and model adaptation. Single frame pose recovery consists of a hypothesis

  7. Neuromechanistic Model of Auditory Bistability.

    Directory of Open Access Journals (Sweden)

    James Rankin

    2015-11-01

    Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
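
    A minimal toy version of this model class (two percept units coupled by mutual inhibition, with slow adaptation and noise, integrated with the Euler method) can be sketched as below. All parameter values, the firing-rate nonlinearity, and the time scales are illustrative assumptions, not those of the published neuromechanistic model.

```python
# Toy two-unit competition model (mutual inhibition + slow adaptation + noise),
# in the spirit of bistable-percept models; all parameters are illustrative and
# are not those of the published neuromechanistic model.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 120.0                          # time step and duration (s)
n = int(T / dt)
beta, phi, tau_a, g, sigma = 4.0, 3.0, 8.0, 1.5, 0.05

def f(x):
    """Sigmoidal firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-8.0 * (x - 0.3)))

r = np.array([0.6, 0.4])                     # "integrated" vs "segregated" units
a = np.zeros(2)                              # adaptation variables
dominant = np.zeros(n, dtype=int)

for i in range(n):
    drive = 1.0 - beta * r[::-1] - phi * a   # constant input, cross-inhibition, adaptation
    r += dt * (-r + f(drive)) + sigma * np.sqrt(dt) * rng.standard_normal(2)
    a += dt * (-a + g * r) / tau_a
    r = np.clip(r, 0.0, 1.0)
    dominant[i] = int(r[1] > r[0])

# Dominance durations = lengths of runs during which the same unit wins.
switch_idx = np.flatnonzero(np.diff(dominant)) + 1
bounds = np.concatenate(([0], switch_idx, [n]))
durations = np.diff(bounds) * dt
print(f"{len(durations) - 1} switches, mean dominance {durations.mean():.2f} s")
```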

  8. Rod phototransduction determines the trade-off of temporal integration and speed of vision in dark-adapted toads.

    Science.gov (United States)

    Haldin, Charlotte; Nymark, Soile; Aho, Ann-Christine; Koskelainen, Ari; Donner, Kristian

    2009-05-06

    Human vision is approximately 10 times less sensitive than toad vision on a cool night. Here, we investigate (1) how far differences in the capacity for temporal integration underlie such differences in sensitivity and (2) whether the response kinetics of the rod photoreceptors can explain temporal integration at the behavioral level. The toad was studied as a model that allows experimentation at different body temperatures. Sensitivity, integration time, and temporal accuracy of vision were measured psychophysically by recording snapping at worm dummies moving at different velocities. Rod photoresponses were studied by ERG recording across the isolated retina. In both types of experiments, the general timescale of vision was varied by using two temperatures, 15 and 25 degrees C. Behavioral integration times were 4.3 s at 15 degrees C and 0.9 s at 25 degrees C, and rod integration times were 4.2-4.3 s at 15 degrees C and 1.0-1.3 s at 25 degrees C. Maximal behavioral sensitivity was fivefold lower at 25 degrees C than at 15 degrees C, which can be accounted for by inability of the "warm" toads to integrate light over longer times than the rods. However, the long integration time at 15 degrees C, allowing high sensitivity, degraded the accuracy of snapping toward quickly moving worms. We conclude that temporal integration explains a considerable part of all variation in absolute visual sensitivity. The strong correlation between rods and behavior suggests that the integration time of dark-adapted vision is set by rod phototransduction at the input to the visual system. This implies that there is an inexorable trade-off between temporal integration and resolution.

  9. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  10. Brain bases for auditory stimulus-driven figure-ground segregation.

    Science.gov (United States)

    Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D

    2011-01-05

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
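
    A stimulus of this general kind (random tone "chords" with a repeating coherent subset of frequencies forming the figure) can be sketched in a few lines; every parameter below (chord duration, frequency pool, component counts, sampling rate) is an illustrative assumption rather than the published stimulus specification.

```python
# Hedged sketch of a stochastic figure-ground stimulus of this general kind: a
# sequence of random-tone chords in which a fixed "figure" subset of frequencies
# repeats across chords. All parameters are illustrative, not the published ones.
import numpy as np

rng = np.random.default_rng(5)
fs, chord_dur, n_chords = 16000, 0.05, 40                    # Hz, s, chord count
freq_pool = np.logspace(np.log10(200), np.log10(7200), 60)   # candidate frequencies (Hz)

figure_freqs = rng.choice(freq_pool, size=4, replace=False)  # coherent figure components
t = np.arange(int(fs * chord_dur)) / fs

chords = []
for _ in range(n_chords):
    background = rng.choice(freq_pool, size=8, replace=False)
    freqs = np.concatenate([figure_freqs, background])       # figure + random background
    chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    chords.append(chord / len(freqs))
stimulus = np.concatenate(chords)

print(f"stimulus length {stimulus.size / fs:.2f} s; figure components at "
      + ", ".join(f"{f:.0f} Hz" for f in np.sort(figure_freqs)))
```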

  11. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  12. Psychological Contract Development: An Integration of Existing Knowledge to Form a Temporal Model

    Directory of Open Access Journals (Sweden)

    Kelly Windle

    2014-07-01

    Full Text Available The psychological contract has received substantial theoretical attention over the past two decades as a popular framework within which to examine contemporary employment relationships. Previous research mostly examines breach and violation of the psychological contract and its impact on employee organization outcomes. Few studies have employed longitudinal, prospective research designs to investigate the psychological contract and as a result, psychological contract content and formation are incompletely understood. It is argued that employment relationships may be better proactively managed with greater understanding of formation and changes in the psychological contract. We examine existing psychological contract literature to identify five key factors proposed to contribute to the formation of psychological contracts. We extend the current research by integrating these factors for the first time into a temporal model of psychological contract development.

  13. ePRISM: A case study in multiple proxy and mixed temporal resolution integration

    Science.gov (United States)

    Robinson, Marci M.; Dowsett, Harry J.

    2010-01-01

    As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (~3.3 to 3.0 Ma), yet still similar in many ways to modern conditions and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferential for paleoclimate reconstruction as it allows for the inclusion of more records of multiple paleotemperature proxies.
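
    The "warm peak average" idea can be made concrete with a toy calculation: restrict the proxy record to a narrowed age window and average only the warmest fraction of SST estimates within it. The window bounds, the 10% fraction, and the synthetic record below are hypothetical choices, not PRISM's published procedure.

```python
# Minimal sketch of a "warm peak average": restrict to a narrowed age window and
# average only the warmest fraction of SST estimates in it. Window bounds, the 10%
# fraction, and the synthetic record are hypothetical, not PRISM's published values.
import numpy as np

rng = np.random.default_rng(2)
age_ka = np.sort(rng.uniform(2950, 3350, 200))            # sample ages (ka)
sst = 24.0 + 1.5 * np.sin(age_ka / 20.0) + rng.normal(0.0, 0.5, age_ka.size)

def warm_peak_average(ages, temps, lo, hi, frac=0.10):
    """Mean of the warmest `frac` of estimates with lo <= age <= hi."""
    sel = temps[(ages >= lo) & (ages <= hi)]
    k = max(1, int(round(frac * sel.size)))
    return np.sort(sel)[-k:].mean()

print(f"warm peak average SST over 3000-3300 ka: {warm_peak_average(age_ka, sst, 3000, 3300):.2f} °C")
```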

  14. Convergent validity of the Integrated Visual and Auditory Continuous Performance Test (IVA+Plus): associations with working memory, processing speed, and behavioral ratings.

    Science.gov (United States)

    Arble, Eamonn; Kuentzel, Jeffrey; Barnett, Douglas

    2014-05-01

    Though the Integrated Visual and Auditory Continuous Performance Test (IVA + Plus) is commonly used by researchers and clinicians, few investigations have assessed its convergent and discriminant validity, especially with regard to its use with children. The present study details correlates of the IVA + Plus using measures of cognitive ability and ratings of child behavior (parent and teacher), drawing upon a sample of 90 psychoeducational evaluations. Scores from the IVA + Plus correlated significantly with the Working Memory and Processing Speed Indexes from the Fourth Edition of the Wechsler Intelligence Scales for Children (WISC-IV), though fewer and weaker significant correlations were seen with behavior ratings scales, and significant associations also occurred with WISC-IV Verbal Comprehension and Perceptual Reasoning. The overall pattern of relations is supportive of the validity of the IVA + Plus; however, general cognitive ability was associated with better performance on most of the primary scores of the IVA + Plus, suggesting that interpretation should take intelligence into account.

  15. Temporal integration and 1/f power scaling in a circuit model of cerebellar interneurons.

    Science.gov (United States)

    Maex, Reinoud; Gutkin, Boris

    2017-07-01

    Inhibitory interneurons interconnected via electrical and chemical (GABAA receptor) synapses form extensive circuits in several brain regions. They are thought to be involved in timing and synchronization through fast feedforward control of principal neurons. Theoretical studies have shown, however, that whereas self-inhibition does indeed reduce response duration, lateral inhibition, in contrast, may generate slow response components through a process of gradual disinhibition. Here we simulated a circuit of interneurons (stellate and basket cells) of the molecular layer of the cerebellar cortex and observed circuit time constants that could rise, depending on parameter values, to >1 s. The integration time scaled both with the strength of inhibition, vanishing completely when inhibition was blocked, and with the average connection distance, which determined the balance between lateral and self-inhibition. Electrical synapses could further enhance the integration time by limiting heterogeneity among the interneurons and by introducing a slow capacitive current. The model can explain several observations, such as the slow time course of OFF-beam inhibition, the phase lag of interneurons during vestibular rotation, or the phase lead of Purkinje cells. Interestingly, the interneuron spike trains displayed power that scaled approximately as 1/f at low frequencies. In conclusion, stellate and basket cells in cerebellar cortex, and interneuron circuits in general, may not only provide fast inhibition to principal cells but also act as temporal integrators that build a very short-term memory. NEW & NOTEWORTHY The most common function attributed to inhibitory interneurons is feedforward control of principal neurons. In many brain regions, however, the interneurons are densely interconnected via both chemical and electrical synapses but the function of this coupling is largely unknown. Based on large-scale simulations of an interneuron circuit of cerebellar cortex, we
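
    The 1/f observation can be checked on any long spike train by estimating its power spectrum and fitting a log-log slope at low frequencies. The sketch below does this for a synthetic rate-modulated spike train (a stand-in, not the circuit simulation); the band limits and the rate model are arbitrary assumptions.

```python
# Check of low-frequency power scaling for a synthetic rate-modulated spike train
# (a stand-in for simulated interneuron output, not the circuit model): estimate
# the spectrum and fit a log-log slope over an arbitrary low-frequency band.
import numpy as np

rng = np.random.default_rng(3)
fs, T = 1000.0, 200.0                          # sampling rate (Hz) and duration (s)
n = int(fs * T)

# Slowly wandering firing rate from leaky integration of white noise (~1 s memory),
# then spikes drawn as a Bernoulli process per 1-ms bin.
rate = np.empty(n)
x = 0.0
for i in range(n):
    x += -x / 1000.0 + rng.normal(0.0, 0.05)
    rate[i] = 20.0 + 5.0 * x                   # Hz
spikes = (rng.random(n) < np.clip(rate / fs, 0.0, 1.0)).astype(float)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
power = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2 / n

band = (freqs > 0.05) & (freqs < 5.0)
slope = np.polyfit(np.log10(freqs[band]), np.log10(power[band]), 1)[0]
print(f"log-log spectral slope over 0.05-5 Hz: {slope:.2f} (a value near -1 is 1/f-like)")
```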

  16. Modulation of isochronous movements in a flexible environment: links between motion and auditory experience.

    Science.gov (United States)

    Bravi, Riccardo; Del Tongo, Claudia; Cohen, Erez James; Dalle Mura, Gabriele; Tognetti, Alessandro; Minciacchi, Diego

    2014-06-01

    The ability to perform isochronous movements while listening to a rhythmic auditory stimulus requires a flexible process that integrates timing information with movement. Here, we explored how non-temporal and temporal characteristics of an auditory stimulus (presence, interval occupancy, and tempo) affect motor performance. These characteristics were chosen on the basis of their ability to modulate the precision and accuracy of synchronized movements. Subjects participated in sessions in which they performed sets of repeated isochronous wrist flexion-extensions under various conditions. The conditions were chosen on the basis of the defined characteristics. Kinematic parameters were evaluated during each session, and temporal parameters were analyzed. In order to study the effects of the auditory stimulus, we minimized all other sensory information that could interfere with its perception or affect the performance of repeated isochronous movements. The present study shows that the distinct characteristics of an auditory stimulus significantly influence isochronous movements by altering their duration. Results provide evidence for an adaptable control of timing in the audio-motor coupling for isochronous movements. This flexibility would make it plausible to use different encoding strategies to adapt audio-motor coupling for specific tasks.

  17. Person perception involves functional integration between the extrastriate body area and temporal pole.

    Science.gov (United States)

    Greven, Inez M; Ramsey, Richard

    2017-02-01

    The majority of human neuroscience research has focussed on understanding functional organisation within segregated patches of cortex. The ventral visual stream has been associated with the detection of physical features such as faces and body parts, whereas the theory-of-mind network has been associated with making inferences about mental states and underlying character, such as whether someone is friendly, selfish, or generous. To date, however, it is largely unknown how such distinct processing components integrate neural signals. Using functional magnetic resonance imaging and connectivity analyses, we investigated the contribution of functional integration to social perception. During scanning, participants observed bodies that had previously been associated with trait-based or neutral information. Additionally, we independently localised the body perception and theory-of-mind networks. We demonstrate that when observing someone who cues the recall of stored social knowledge compared to non-social knowledge, a node in the ventral visual stream (extrastriate body area) shows greater coupling with part of the theory-of-mind network (temporal pole). These results show that functional connections provide an interface between perceptual and inferential processing components, thus providing neurobiological evidence that supports the view that understanding the visual environment involves interplay between conceptual knowledge and perceptual processing. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. An Integrated Approach of Model checking and Temporal Fault Tree for System Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Koh, Kwang Yong; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2009-10-15

    Digitalization of instruments and control systems in nuclear power plants offers the potential to improve plant safety and reliability through features such as increased hardware reliability and stability, and improved failure detection capability. It, however, makes the systems and their safety analysis more complex. Originally, safety analysis was applied to hardware system components and formal methods mainly to software. For software-controlled or digitalized systems, it is necessary to integrate both. Fault tree analysis (FTA), which has been one of the most widely used safety analysis techniques in the nuclear industry, suffers from several drawbacks as described in the literature. In this work, to resolve the problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with fault trees. We introduce several temporal gates based on timed computational tree logic (TCTL) to capture absolute time behaviors of the system and to give concrete semantics to fault tree gates to reduce errors during the analysis, and use model checking to automate the reasoning process of FTA.

  19. Skill Learning for Intelligent Robot by Perception-Action Integration: A View from Hierarchical Temporal Memory

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2017-01-01

    Full Text Available Skill learning autonomously through interactions with the environment is a crucial ability for an intelligent robot. A perception-action integration or sensorimotor cycle, an important issue in imitation learning, is a natural mechanism that does not require complex programming. Recently, neurocomputing models and developmental intelligence methods have been considered a new trend for implementing robot skill learning. In this paper, based on research into the human brain neocortex model, we present a skill learning method using a perception-action integration strategy from the perspective of hierarchical temporal memory (HTM) theory. The sequential sensor data representing a certain skill from an RGB-D camera are received and then encoded as a sequence of Sparse Distributed Representation (SDR) vectors. The sequential SDR vectors are treated as the inputs of the perception-action HTM. The HTM learns sequences of SDRs and makes predictions of what the next input SDR will be. It stores the transitions between the currently perceived sensor data and the next predicted actions. We evaluated the performance of this proposed framework for learning a hand-shaking skill on a humanoid NAO robot. The experimental results show that the skill learning method designed in this paper is promising.
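
    The encode-then-predict idea can be caricatured in a few lines: map each sensor frame to a Sparse Distributed Representation (here simply a random projection followed by top-k sparsification) and store observed SDR-to-SDR transitions as the "memory". This is only a conceptual sketch under those assumptions, not the HTM algorithm or the authors' implementation.

```python
# Conceptual sketch only (not the HTM algorithm): encode each sensor frame as a
# Sparse Distributed Representation via a fixed random projection plus top-k
# sparsification, and store SDR-to-SDR transitions to predict the next input.
import numpy as np

rng = np.random.default_rng(4)
input_dim, sdr_dim, k = 64, 512, 20            # hypothetical sizes and sparsity
proj = rng.normal(size=(sdr_dim, input_dim))   # fixed random encoder

def encode(frame):
    """Return the indices of the k most active encoder units (the SDR)."""
    activation = proj @ frame
    return frozenset(np.argpartition(activation, -k)[-k:].tolist())

# A fake "skill" sequence of sensor frames (stand-ins for RGB-D derived features).
frames = [np.sin(np.linspace(0.0, 3.0, input_dim) + 0.1 * step) for step in range(50)]
sdrs = [encode(f) for f in frames]

transitions = {}                               # SDR observed now -> SDR observed next
for current, nxt in zip(sdrs[:-1], sdrs[1:]):
    transitions[current] = nxt

def predict_next(frame):
    return transitions.get(encode(frame))

print("prediction for frame 10 matches the stored next SDR:",
      predict_next(frames[10]) == sdrs[11])
```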

  20. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after a cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  1. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  2. Integrating spatial and temporal oxygen data to improve the quantification of in situ petroleum biodegradation rates.

    Science.gov (United States)

    Davis, Gregory B; Laslett, Dean; Patterson, Bradley M; Johnston, Colin D

    2013-03-15

    Accurate estimation of biodegradation rates during remediation of petroleum impacted soil and groundwater is critical to avoid excessive costs and to ensure remedial effectiveness. Oxygen depth profiles or oxygen consumption over time are often used separately to estimate the magnitude and timeframe for biodegradation of petroleum hydrocarbons in soil and subsurface environments. Each method has limitations. Here we integrate spatial and temporal oxygen concentration data from a field experiment to develop better estimates and more reliably quantify biodegradation rates. During a nine-month bioremediation trial, 84 sets of respiration rate data (where aeration was halted and oxygen consumption was measured over time) were collected from in situ oxygen sensors at multiple locations and depths across a diesel non-aqueous phase liquid (NAPL) contaminated subsurface. Additionally, detailed vertical soil moisture (air-filled porosity) and NAPL content profiles were determined. The spatial and temporal oxygen concentration (respiration) data were modeled assuming one-dimensional diffusion of oxygen through the soil profile which was open to the atmosphere. Point and vertically averaged biodegradation rates were determined, and compared to modeled data from a previous field trial. Point estimates of biodegradation rates assuming no diffusion ranged up to 58 mg kg(-1) day(-1) while rates accounting for diffusion ranged up to 87 mg kg(-1) day(-1). Typically, accounting for diffusion increased point biodegradation rate estimates by 15-75% and vertically averaged rates by 60-80% depending on the averaging method adopted. Importantly, ignoring diffusion led to overestimation of biodegradation rates where the location of measurement was outside the zone of NAPL contamination. Over or underestimation of biodegradation rate estimates leads to cost implications for successful remediation of petroleum impacted sites. Crown Copyright © 2013. Published by Elsevier Ltd. All rights
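
    The diffusion correction amounts to a one-dimensional mass balance: with consumption rate R, dC/dt = D*d2C/dz2 - R, so R = D*d2C/dz2 - dC/dt, whereas the naive estimate uses only -dC/dt. The finite-difference sketch below uses hypothetical oxygen profiles, depths, and diffusivity solely to show how the correction raises the estimated rate when the profile is curved.

```python
# Finite-difference sketch of the diffusion correction: with consumption rate R,
# dC/dt = D*d2C/dz2 - R, hence R = D*d2C/dz2 - dC/dt, whereas the naive estimate
# is R = -dC/dt. Depths, profiles, and the diffusivity below are hypothetical.
import numpy as np

depths = np.arange(0.0, 2.01, 0.25)                 # m
t0, t1 = 0.0, 6.0 * 3600.0                          # two measurement times (s)
D = 1.5e-6                                          # effective O2 diffusivity (m^2/s)

c_t0 = 0.28 * np.exp(-depths / 0.8)                 # O2 concentration profiles (kg/m^3)
c_t1 = 0.26 * np.exp(-depths / 0.7)

dc_dt = (c_t1 - c_t0) / (t1 - t0)
dz = depths[1] - depths[0]
d2c_dz2 = np.gradient(np.gradient(c_t1, dz), dz)    # spatial curvature at t1

rate_naive = -dc_dt                                 # drawdown only
rate_corrected = D * d2c_dz2 - dc_dt                # mass-balance estimate

for z, r0, r1 in zip(depths, rate_naive, rate_corrected):
    print(f"z={z:4.2f} m  naive={r0: .2e}  corrected={r1: .2e}  (kg m^-3 s^-1)")
```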

  3. Proposal for arbitrary-order temporal integration of ultrafast optical signals using a single uniform-period fiber Bragg grating.

    Science.gov (United States)

    Asghari, Mohammad H; Azaña, José

    2008-07-01

    A simple and practical all-fiber design for implementing arbitrary-order temporal integration of ultrafast optical waveforms is proposed and numerically investigated. We demonstrate that an ultrafast photonics integrator of any desired integration order can be implemented using a uniform-period fiber Bragg grating (FBG) with a properly designed amplitude-only grating apodization profile. In particular, the grating coupling strength must vary according to the (N-1) power of the fiber distance for implementing an Nth-order photonics integrator (N=1,2,...). This approach requires the same level of practical difficulty for realizing any given integration order. The proposed integration devices operate over a limited time window, which is approximately fixed by the round-trip propagation time in the FBG. Ultrafast arbitrary-order all-optical integrators capable of accurate operation over nanosecond time windows can be implemented using readily feasible FBGs.
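
    The stated design rule, coupling strength growing as the (N-1)th power of position along the fiber for an Nth-order integrator, is easy to tabulate; the grating length, sample points, and normalisation in the sketch below are arbitrary.

```python
# Tabulating the stated rule: for an Nth-order integrator the coupling strength
# grows as the (N-1)th power of position along the grating. Length, sample
# points, and normalisation are arbitrary.
import numpy as np

def apodization_profile(z, order, kappa_max=1.0):
    """Coupling-strength profile kappa(z) ~ z**(order - 1), scaled to kappa_max."""
    profile = z ** (order - 1)
    return kappa_max * profile / profile.max()

z = np.linspace(0.0, 0.05, 6)        # positions along a 5-cm grating (m)
for n in (1, 2, 3):
    print(f"N={n}:", np.round(apodization_profile(z, n), 3))
```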

  4. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  5. Fundamental deficits of auditory perception in Wernicke's aphasia.

    Science.gov (United States)

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
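
    For readers unfamiliar with "criterion-free, adaptive measures of threshold", the sketch below shows one generic example, a 2-down/1-up staircase run against a simulated listener, which converges near the 70.7%-correct point. The specific adaptive procedure, step rule, and psychometric function used in the study are not specified here, so everything in the sketch is a generic assumption.

```python
# Generic 2-down/1-up adaptive staircase run against a simulated listener, shown
# only to illustrate adaptive threshold estimation of the kind mentioned above;
# the study's actual procedure, step rule, and psychometric function are not
# reproduced here, so every number below is an assumption.
import math
import random
import statistics

random.seed(0)
true_threshold = 4.0                  # e.g., FM depth in arbitrary units (hypothetical)

def listener_correct(level):
    """Simulated 2AFC listener with a logistic psychometric function."""
    p = 0.5 + 0.5 / (1.0 + math.exp(-(level - true_threshold) / 0.8))
    return random.random() < p

level, step = 16.0, 2.0               # starting level and multiplicative step
run_correct, reversals, last_direction = 0, [], None

while len(reversals) < 10:
    if listener_correct(level):
        run_correct += 1
        if run_correct < 2:
            continue
        run_correct, direction = 0, "down"    # two correct in a row -> make it harder
        level /= step
    else:
        run_correct, direction = 0, "up"      # any error -> make it easier
        level *= step
    if last_direction and direction != last_direction:
        reversals.append(level)
        if len(reversals) == 4:
            step = 1.25                       # finer steps after the first reversals
    last_direction = direction

print(f"estimated threshold: {statistics.geometric_mean(reversals[-6:]):.2f}")
```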

  6. Effect of conductive hearing loss on central auditory function.

    Science.gov (United States)

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

    It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 year-old, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average of GIN thresholds was significantly smaller for the control group than for the CHL group for both ears (right: p=0.004; left: phearing for both sides (phearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  7. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.

  8. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    Science.gov (United States)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
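
    Two of the ingredients named above, lexicon-based sentiment scoring per time bucket and term associations from co-occurrence counts, can be caricatured with a toy example; the lexicon, window size, and feedback lines are invented for illustration and do not reflect the authors' system.

```python
# Toy caricature of two ingredients described above: a lexicon-based sentiment
# score per time bucket and term associations from co-occurrence counts. The
# lexicon, window size, and feedback lines are invented for illustration.
from collections import Counter, defaultdict

POSITIVE = {"great", "love", "fast"}
NEGATIVE = {"slow", "broken", "hate"}

feedback = [
    ("2012-01-03", "love the battery life but screen is slow"),
    ("2012-01-03", "great camera, hate the slow menu"),
    ("2012-01-04", "screen broken after a week"),
]

daily_sentiment = defaultdict(int)
associations = Counter()
for day, text in feedback:
    words = text.lower().replace(",", "").split()
    daily_sentiment[day] += sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    for i, w in enumerate(words):
        if w in POSITIVE | NEGATIVE:
            for other in words[max(0, i - 2): i + 3]:     # +/- 2-word co-occurrence window
                if other != w and other not in POSITIVE | NEGATIVE:
                    associations[(w, other)] += 1

print(dict(daily_sentiment))
print(associations.most_common(3))
```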

  9. SPATIO-TEMPORAL DATA MODEL FOR INTEGRATING EVOLVING NATION-LEVEL DATASETS

    Directory of Open Access Journals (Sweden)

    A. Sorokine

    2017-10-01

    Full Text Available Ability to easily combine the data from diverse sources in a single analytical workflow is one of the greatest promises of the Big Data technologies. However, such integration is often challenging as datasets originate from different vendors, governments, and research communities that results in multiple incompatibilities including data representations, formats, and semantics. Semantics differences are hardest to handle: different communities often use different attribute definitions and associate the records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time is often complicated by the difference in how boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving identity of geographic entities over time by defining criteria for the entity existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). Proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. Practical implementation of our model is demonstrated using PostgreSQL object-relational database with the use of temporal, geospatial, and NoSQL database extensions.

  10. Spatio-Temporal Data Model for Integrating Evolving Nation-Level Datasets

    Science.gov (United States)

    Sorokine, A.; Stewart, R. N.

    2017-10-01

    Ability to easily combine the data from diverse sources in a single analytical workflow is one of the greatest promises of the Big Data technologies. However, such integration is often challenging as datasets originate from different vendors, governments, and research communities that results in multiple incompatibilities including data representations, formats, and semantics. Semantics differences are hardest to handle: different communities often use different attribute definitions and associate the records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time is often complicated by the difference in how boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving identity of geographic entities over time by defining criteria for the entity existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). Proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. Practical implementation of our model is demonstrated using PostgreSQL object-relational database with the use of temporal, geospatial, and NoSQL database extensions.
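
    The event-based idea can be sketched with a couple of plain data structures: entities whose validity intervals are delimited by events, plus a rule for mapping records onto successors once an entity ceases to exist. The sketch below uses Python dataclasses and a hypothetical successor table; it is an illustration of the modelling idea, not the paper's PostgreSQL schema.

```python
# Sketch of the event-based idea with plain data structures: entities whose
# validity intervals are delimited by events, plus a successor rule for mapping
# records onto the entities valid at a given date. Entities, dates, and the
# successor table are illustrative; this is not the paper's database schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Entity:
    name: str
    start: date                      # event that created the entity
    end: Optional[date] = None       # event that ended it (None = still exists)

    def exists_on(self, d: date) -> bool:
        return self.start <= d and (self.end is None or d < self.end)

REGISTRY = [
    Entity("Czechoslovakia", date(1918, 10, 28), date(1993, 1, 1)),
    Entity("Czechia", date(1993, 1, 1)),
    Entity("Slovakia", date(1993, 1, 1)),
]
SUCCESSORS = {"Czechoslovakia": ["Czechia", "Slovakia"]}    # re-attribution rule

def resolve(name: str, d: date):
    """Entities a record with this name and date should be attributed to."""
    direct = [e for e in REGISTRY if e.name == name and e.exists_on(d)]
    if direct:
        return direct
    return [e for e in REGISTRY if e.name in SUCCESSORS.get(name, []) and e.exists_on(d)]

print([e.name for e in resolve("Czechoslovakia", date(1980, 5, 1))])   # ['Czechoslovakia']
print([e.name for e in resolve("Czechoslovakia", date(2000, 5, 1))])   # ['Czechia', 'Slovakia']
```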

  11. Role of DARPP-32 and ARPP-21 in the Emergence of Temporal Constraints on Striatal Calcium and Dopamine Integration

    Science.gov (United States)

    Bhalla, Upinder S.; Hellgren Kotaleski, Jeanette

    2016-01-01

    In reward learning, the integration of NMDA-dependent calcium and dopamine by striatal projection neurons leads to potentiation of corticostriatal synapses through CaMKII/PP1 signaling. In order to elicit the CaMKII/PP1-dependent response, the calcium and dopamine inputs should arrive in temporal proximity and must follow a specific (dopamine after calcium) order. However, little is known about the cellular mechanism which enforces these temporal constraints on the signal integration. In this computational study, we propose that these temporal requirements emerge as a result of the coordinated signaling via two striatal phosphoproteins, DARPP-32 and ARPP-21. Specifically, DARPP-32-mediated signaling could implement an input-interval dependent gating function, via transient PP1 inhibition, thus enforcing the requirement for temporal proximity. Furthermore, ARPP-21 signaling could impose the additional input-order requirement of calcium and dopamine, due to its Ca2+/calmodulin sequestering property when dopamine arrives first. This highlights the possible role of phosphoproteins in the temporal aspects of striatal signal transduction. PMID:27584878

  12. Design of an optical temporal integrator based on a phase-shifted fiber Bragg grating in transmission.

    Science.gov (United States)

    Quoc Ngo, Nam

    2007-10-15

    We present a theoretical study of a new application of a simple pi-phase-shifted fiber Bragg grating (PSFBG) in transmission mode as a high-speed optical temporal integrator. The PSFBG consists of two concatenated identical uniform FBGs with a pi phase shift between them. When the reflectivities of the FBGs are extremely close to 100%, the transmissive PSFBG can perform the time integral of the complex envelope of an arbitrary input optical signal with high accuracy. As an example, the integrator is numerically shown to be able to convert an input Gaussian pulse into an optical step signal.
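
    Independently of the grating physics, the target input/output behaviour is simple to visualise: the running time integral of a Gaussian pulse envelope is a smooth step. The sketch below only demonstrates that signal-level relationship; the pulse width and time grid are arbitrary.

```python
# Signal-level illustration only (no grating physics): the running time integral
# of a Gaussian pulse envelope is a smooth step. Pulse width and time grid are
# arbitrary choices.
import numpy as np

t = np.linspace(-20e-12, 20e-12, 4001)            # time axis (s)
dt = t[1] - t[0]
pulse = np.exp(-0.5 * (t / 2e-12) ** 2)           # 2-ps Gaussian envelope (arb. units)

integrated = np.cumsum(pulse) * dt                # running time integral
integrated /= integrated[-1]                      # normalise to a unit step

for probe in (-10e-12, 0.0, 10e-12):
    print(f"integrated value at {probe * 1e12:+.0f} ps: {integrated[np.searchsorted(t, probe)]:.3f}")
```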

  13. Analyzing Local Spatio-Temporal Patterns of Police Calls-for-Service Using Bayesian Integrated Nested Laplace Approximation

    Directory of Open Access Journals (Sweden)

    Hui Luan

    2016-09-01

    Full Text Available This research investigates spatio-temporal patterns of police calls-for-service in the Region of Waterloo, Canada, at a fine spatial and temporal resolution. Modeling was implemented via Bayesian Integrated Nested Laplace Approximation (INLA). Temporal patterns for two-hour time periods, spatial patterns at the small-area scale, and space-time interaction (i.e., unusual departures from overall spatial and temporal patterns) were estimated. Temporally, calls-for-service were found to be lowest in the early morning (02:00–03:59) and highest in the evening (20:00–21:59), while high levels of calls-for-service were spatially located in central business areas and in areas characterized by major roadways, universities, and shopping centres. Space-time interaction was observed to be geographically dispersed during daytime hours but concentrated in central business areas during evening hours. Interpreted through the routine activity theory, results are discussed with respect to law enforcement resource demand and allocation, and the advantages of modeling spatio-temporal datasets with Bayesian INLA methods are highlighted.

  14. Processamento auditivo: comparação entre potenciais evocados auditivos de média latência e testes de padrões temporais Auditory processing: comparison between auditory middle latency response and temporal pattern tests

    Directory of Open Access Journals (Sweden)

    Eliane Schochat

    2009-06-01

    Full Text Available PURPOSE: to verify the agreement between the results of the Auditory Middle Latency Response evaluation and temporal pattern tests. METHODS: 155 subjects of both genders, aged between seven and 16 years and with normal peripheral hearing, were evaluated. The subjects underwent the Frequency (Pitch) and Duration Pattern tests and Auditory Middle Latency Response testing. RESULTS: subjects were classified into two groups, normal or altered, for auditory processing. The rate of altered results was around 30%, except for the Auditory Middle Latency Response, for which it was somewhat lower (17.4%). Frequency and duration pattern results agreed up to 12 years of age. From 13 years on, altered results were more frequent for the frequency pattern than for the duration pattern. The frequency and duration patterns (right and left ears) and the Auditory Middle Latency Response did not agree. At 7 and 8 years, the combination of normal frequency/duration patterns with altered Middle Latency Response was more frequent than the combination of altered frequency/duration patterns with normal Middle Latency Response; at the other ages, the opposite occurred. There was no statistical difference between age groups in the distribution of normal and altered results for the frequency pattern (right and left ears) or for the Auditory Middle Latency Response, except for the duration pattern in the 9- and 10-year-old group. CONCLUSION: there was no agreement between the results of the Auditory Middle Latency Response and the temporal pattern tests applied.

  15. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    Science.gov (United States)

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  16. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data was obtained from 44 typically developing (TD) children (mean age 10.4±2.4 years) and 95 children with ASD (mean age 10.2±2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  17. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    Science.gov (United States)

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
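    The cross-cue classification logic described above can be made concrete with a small sketch. This is not the authors' analysis code; it assumes voxel response patterns have already been extracted into NumPy arrays (X_itd, X_ild, both hypothetical names) with left/right hemifield labels, and uses a generic linear classifier to ask whether a decoder trained on ITD patterns transfers to ILD patterns.

```python
# Minimal cross-cue MVPA sketch: train on ITD-evoked voxel patterns,
# test on ILD-evoked patterns. Above-chance transfer is one signature of
# an integrated (cue-independent) spatial representation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def cross_cue_accuracy(X_train, y_train, X_test, y_test):
    """Fit a linear classifier on one cue type and score it on the other."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Toy data: 40 trials x 200 voxels per cue; labels = left/right hemifield.
rng = np.random.default_rng(0)
X_itd = rng.normal(size=(40, 200))
X_ild = rng.normal(size=(40, 200))
y = np.repeat([0, 1], 20)
print(cross_cue_accuracy(X_itd, y, X_ild, y))  # ~0.5 (chance) for random data
```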

  18. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    Science.gov (United States)

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  19. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  20. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  1. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    Science.gov (United States)

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamic of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. While in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.
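    As a rough illustration of the directed-connectivity test named above (not the study's pipeline), the sketch below runs a pairwise Granger causality test between two synthetic time series standing in for an auditory-cortex channel and a non-auditory channel; the channel roles, lag range, and coupling are placeholders.

```python
# Illustrative pairwise Granger causality test: does past activity in one
# channel help predict the other beyond its own past?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 500
ac = rng.normal(size=n)                              # stand-in for an AC trace
other = np.roll(ac, 3) + 0.5 * rng.normal(size=n)    # trace lag-coupled to the AC

# grangercausalitytests checks whether the second column Granger-causes the
# first; here, whether the AC trace predicts the "non-auditory" trace.
data = np.column_stack([other, ac])
results = grangercausalitytests(data, maxlag=5)      # prints per-lag summaries
print(results[3][0]["ssr_ftest"])                    # (F, p, df_denom, df_num) at lag 3
```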

  2. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamic of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. While in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.

  3. Multisensory speech perception without the left superior temporal sulcus.

    Science.gov (United States)

    Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S

    2012-09-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left tempo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS but in the right STS, more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response in right STS response to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Electrophysiological and auditory behavioral evaluation of individuals with left temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha

    2010-02-01

    Full Text Available The purpose of this study was to determine the repercussions of left temporal lobe epilepsy (TLE) in subjects with left mesial temporal sclerosis (LMTS) with respect to a behavioral test, the Dichotic Digits Test (DDT), and the event-related potential (P300), and to compare the two temporal lobes in terms of P300 latency and amplitude. We studied 12 subjects with LMTS and 12 control subjects without LMTS. Relationships between P300 latency and P300 amplitude at sites C3A1, C3A2, C4A1, and C4A2, together with DDT results, were studied in inter- and intra-group analyses. On the DDT, subjects with LMTS performed poorly in comparison to controls; this difference was statistically significant for both ears. The P300 was absent in 6 individuals with LMTS. Regarding P300 latency and amplitude, as a group, LMTS subjects presented a trend toward greater P300 latency and lower P300 amplitude at all positions relative to controls, with the difference being statistically significant for C3A1 and C4A2. However, it was not possible to determine a laterality effect of the P300 between affected and unaffected hemispheres.

  5. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
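    The equalization step mentioned above relies on Stevens' power law, which states that perceived magnitude grows as a power function of stimulus intensity. A generic formulation of how matched visual and auditory feedback magnitudes could be derived (symbols are ours, not taken from the study) is:

```latex
% Stevens' power law: perceived magnitude \psi grows as a power of stimulus
% intensity \varphi. Equalizing visual and auditory feedback means choosing
% intensities whose predicted perceptual magnitudes are equal.
\[
  \psi = k\,\varphi^{a}, \qquad
  \psi_{\mathrm{vis}} = \psi_{\mathrm{aud}}
  \;\Rightarrow\;
  k_{\mathrm{vis}}\,\varphi_{\mathrm{vis}}^{\,a_{\mathrm{vis}}}
  = k_{\mathrm{aud}}\,\varphi_{\mathrm{aud}}^{\,a_{\mathrm{aud}}}
  \;\Rightarrow\;
  \varphi_{\mathrm{vis}}
  = \left(\frac{k_{\mathrm{aud}}}{k_{\mathrm{vis}}}\,
      \varphi_{\mathrm{aud}}^{\,a_{\mathrm{aud}}}\right)^{1/a_{\mathrm{vis}}}
\]
```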

  6. The utility of quantitative electroencephalography and Integrated Visual and Auditory Continuous Performance Test as auxiliary tools for the Attention Deficit Hyperactivity Disorder diagnosis.

    Science.gov (United States)

    Kim, JunWon; Lee, YoungSik; Han, DougHyun; Min, KyungJoon; Kim, DoHyun; Lee, ChangWon

    2015-03-01

    This study investigated the clinical utility of quantitative electroencephalography (QEEG) and the Integrated Visual and Auditory Continuous Performance Test (IVA+CPT) as auxiliary tools for assessing Attention Deficit Hyperactivity Disorder (ADHD). All 157 subjects were assessed using the Korean version of the Diagnostic Interview Schedule for Children Version IV (DISC-IV). We measured EEG absolute power in 21 channels and conducted the IVA+CPT. We analyzed QEEG by frequency range: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), and beta (13.5-30 Hz). To remove artifacts, independent component analysis (ICA) was conducted, and the tester confirmed the results again. All of the IVA+CPT quotients showed significant differences between the ADHD and control groups. The ADHD group showed significantly increased delta and theta activity compared with the control group. The z-scores of theta were negatively correlated with the scores of IVA+CPT in the ADHD combined type, and those of beta were positively correlated. IVA+CPT and QEEG significantly discriminated between the ADHD and control groups. The commission error of IVA+CPT showed an accuracy of 82.1%, and the omission error showed an accuracy of 78.6%. The IVA+CPT and QEEG are expected to be valuable tools for aiding accurate ADHD diagnosis. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
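    A minimal sketch of the kind of band-power quantification described above is given below. It assumes a single EEG channel sampled at 250 Hz; the band edges follow the abstract, while the channel, sampling rate, and data are placeholders rather than the study's recordings.

```python
# Band-power extraction via Welch's method, plus the theta-to-beta ratio,
# one of the QEEG markers discussed in the ADHD literature.
import numpy as np
from scipy.signal import welch

def band_powers(x, fs=250.0):
    freqs, psd = welch(x, fs=fs, nperseg=int(fs * 2))
    df = freqs[1] - freqs[0]
    bands = {"delta": (1, 4), "theta": (4, 8), "slow_alpha": (8, 10),
             "fast_alpha": (10, 13.5), "beta": (13.5, 30)}
    # Integrate the PSD over each band (rectangle rule over frequency bins).
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in bands.items()}

x = np.random.default_rng(2).normal(size=250 * 60)  # 60 s of placeholder EEG
p = band_powers(x)
print(p["theta"] / p["beta"])  # theta-to-beta ratio
```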

  7. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
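    The contrast between gradual and abrupt modulations can be illustrated with a toy sketch (not the study's stimulus code): a sinusoidal envelope and a square-wave envelope at the same modulation rate, where only the latter contains the step-like transients reported to drive efficient audiovisual search. The sample rate and modulation rate below are arbitrary.

```python
# Smooth (sinusoidal) versus transient (square-wave) modulation envelopes.
import numpy as np

fs, dur, rate = 1000, 2.0, 1.25                 # sample rate (Hz), duration (s), modulation rate (Hz)
t = np.arange(int(fs * dur)) / fs
sine_mod = 0.5 * (1 + np.sin(2 * np.pi * rate * t))            # gradual envelope
square_mod = (np.sin(2 * np.pi * rate * t) > 0).astype(float)  # abrupt on/off envelope

# The sample-to-sample change is bounded for the sine but jumps by a full
# unit at each square-wave transition, i.e., the transient events.
print(np.abs(np.diff(sine_mod)).max(), np.abs(np.diff(square_mod)).max())
```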

  8. Representing Representation: Integration between the Temporal Lobe and the Posterior Cingulate Influences the Content and Form of Spontaneous Thought.

    Directory of Open Access Journals (Sweden)

    Jonathan Smallwood

    Full Text Available When not engaged in the moment, we often spontaneously represent people, places and events that are not present in the environment. Although this capacity has been linked to the default mode network (DMN), it remains unclear how interactions between the nodes of this network give rise to particular mental experiences during spontaneous thought. One hypothesis is that the core of the DMN integrates information from medial and lateral temporal lobe memory systems, which represent different aspects of knowledge. Individual differences in the connectivity between temporal lobe regions and the default mode network core would then predict differences in the content and form of people's spontaneous thoughts. This study tested this hypothesis by examining the relationship between seed-based functional connectivity and the contents of spontaneous thought recorded in a laboratory study several days later. Variations in connectivity from both medial and lateral temporal lobe regions were associated with different patterns of spontaneous thought, and these effects converged on an overlapping region in the posterior cingulate cortex. We propose that the posterior core of the DMN acts as a representational hub that integrates information represented in the medial and lateral temporal lobe, and that this process is important in determining the content and form of spontaneous thought.

  9. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Giesel, Frederik L.; Mehndiratta, Amit; Hempel, Albrecht; Hempel, Eckhard; Kress, Kai R.; Essig, Marco; Schröder, Johannes

    2012-01-01

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  10. On the relations among temporal integration for loudness, loudness discrimination, and the form of the loudness function. (A)

    DEFF Research Database (Denmark)

    Poulsen, Torben; Buus, Søren; Florentine, M

    1996-01-01

    Temporal integration for loudness was measured as a function of level from 2 to 60 dB SL using 2-, 10-, 50-, and 250-ms tones at 5 kHz. The adaptive 2I,2AFC procedure converged at the level required to make the variable stimulus just louder than the fixed stimulus. Thus the data yield estimates of the levels required to make tones of different durations equally loud and of the just noticeable differences for loudness level. Results for four listeners with normal hearing show that the amount of temporal integration, defined as the level difference between equally loud short and long tones, varies markedly with level and is largest at moderate levels. The effect of level increases as the duration of the short stimulus decreases and is largest for comparisons between the 2- and 250-ms tones. The loudness-level jnds are also largest at moderate levels and, contrary to traditional jnds for the level

  11. Integrating cross-scale analysis in the spatial and temporal domains for classification of behavioral movement

    Directory of Open Access Journals (Sweden)

    Ali Soleymani

    2014-06-01

    Full Text Available Since various behavioral movement patterns are likely to be valid within different, unique ranges of spatial and temporal scales (e.g., instantaneous, diurnal, or seasonal) with the corresponding spatial extents, a cross-scale approach is needed for accurate classification of behaviors expressed in movement. Here, we introduce a methodology for the characterization and classification of behavioral movement data that relies on computing and analyzing movement features jointly in both the spatial and temporal domains. The proposed methodology consists of three stages. In the first stage, focusing on the spatial domain, the underlying movement space is partitioned into several zonings that correspond to different spatial scales, and features related to movement are computed for each partitioning level. In the second stage, concentrating on the temporal domain, several movement parameters are computed from trajectories across a series of temporal windows of increasing sizes, yielding another set of input features for the classification. For both the spatial and the temporal domains, the "reliable scale" is determined by an automated procedure. This is the scale at which the best classification accuracy is achieved, using only spatial or temporal input features, respectively. The third stage takes the measures from the spatial and temporal domains of movement, computed at the corresponding reliable scales, as input features for behavioral classification. With a feature selection procedure, the most relevant features contributing to known behavioral states are extracted and used to learn a classification model. The potential of the proposed approach is demonstrated on a dataset of adult zebrafish (Danio rerio) swimming movements in testing tanks, following exposure to different drug treatments. Our results show that behavioral classification accuracy greatly increases when firstly cross-scale analysis is used to determine the best analysis scale, and

  12. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom

  13. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  14. Neural Correlates of Automatic and Controlled Auditory Processing in Schizophrenia

    Science.gov (United States)

    Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil

    2009-01-01

    Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926

  15. Speech processing: from peripheral to hemispheric asymmetry of the auditory system.

    Science.gov (United States)

    Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier

    2012-01-01

    Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then exposes behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory or a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  16. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including the Broca's language region. Furthermore, how real the hallucination that subjects experienced was depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  17. Design of all-optical high-order temporal integrators based on multiple-phase-shifted Bragg gratings.

    Science.gov (United States)

    Asghari, Mohammad H; Azaña, José

    2008-07-21

    In exact analogy with their electronic counterparts, photonic temporal integrators are fundamental building blocks for constructing all-optical circuits for ultrafast information processing and computing. In this work, we introduce a simple and general approach for realizing all-optical arbitrary-order temporal integrators. We demonstrate that the Nth cumulative time integral of the complex field envelope of an input optical waveform can be obtained by simply propagating this waveform through a single uniform fiber/waveguide Bragg grating (BG) incorporating N pi-phase shifts along its axial profile. We derive here the design specifications of photonic integrators based on multiple-phase-shifted BGs. We show that the phase shifts in the BG structure can be arbitrarily located along the grating length provided that each uniform grating section (sections separated by the phase shifts) is sufficiently long so that its associated peak reflectivity reaches nearly 100%. The resulting designs are demonstrated by numerical simulations assuming all-fiber implementations. Our simulations show that the proposed approach can provide optical operation bandwidths in the tens-of-GHz regime using readily feasible photo-induced fiber BG structures.
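    A numerical sketch of the target operation, the Nth cumulative time integral of a complex field envelope, is shown below. It approximates the integrator's ideal transfer behavior with repeated cumulative sums; it does not simulate the Bragg grating itself, and the sampling step and pulse shape are illustrative.

```python
# N-th cumulative time integral of a sampled complex envelope.
import numpy as np

def nth_cumulative_integral(envelope, dt, n):
    out = np.asarray(envelope, dtype=complex)
    for _ in range(n):
        out = np.cumsum(out) * dt   # first-order integration, repeated n times
    return out

dt = 1e-12                                     # 1 ps sampling step
t = np.arange(0, 50e-12, dt)
pulse = np.exp(-((t - 10e-12) / 2e-12) ** 2)   # Gaussian input envelope
first = nth_cumulative_integral(pulse, dt, 1)  # approaches a step
second = nth_cumulative_integral(pulse, dt, 2) # approaches a ramp
print(abs(first[-1]), abs(second[-1]))
```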

  18. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  19. Geo-Parcel Based Crop Identification by Integrating High Spatial-Temporal Resolution Imagery from Multi-Source Satellite Data

    Directory of Open Access Journals (Sweden)

    Yingpin Yang

    2017-12-01

    Full Text Available Geo-parcel based crop identification plays an important role in precision agriculture. It meets the needs of refined farmland management. This study presents an improved identification procedure for geo-parcel based crop identification by combining fine-resolution images and multi-source medium-resolution images. GF-2 images with fine spatial resolution of 0.8 m provided agricultural farming plot boundaries, and GF-1 (16 m) and Landsat 8 OLI data were used to transform the geo-parcel based enhanced vegetation index (EVI) time-series. In this study, we propose a piecewise EVI time-series smoothing method to fit irregular time profiles, especially for crop rotation situations. Global EVI time-series were divided into several temporal segments, from which phenological metrics could be derived. This method was applied to Lixian, where crop rotation was the common practice of growing different types of crops, in the same plot, in sequenced seasons. After collection of phenological features and multi-temporal spectral information, Random Forest (RF) was performed to classify crop types, and the overall accuracy was 93.27%. Moreover, an analysis of feature significance showed that phenological features were of greater importance for distinguishing agricultural land cover compared to temporal spectral information. The identification results indicated that the integration of high spatial-temporal resolution imagery is promising for geo-parcel based crop identification and that the newly proposed smoothing method is effective.
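    Two ingredients named above, piecewise smoothing of a geo-parcel EVI time series and Random Forest classification on per-segment phenological features, can be sketched as follows. The breakpoints, feature set, and data are illustrative assumptions, not the paper's implementation.

```python
# Piecewise smoothing of an EVI time series (segments split at assumed crop
# rotation breakpoints) plus Random Forest classification on parcel features.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import RandomForestClassifier

def piecewise_smooth(evi, breakpoints, window=7, poly=2):
    """Smooth each temporal segment separately so one crop's profile does not
    bleed into the next season's."""
    out = np.empty_like(evi, dtype=float)
    edges = [0] + list(breakpoints) + [len(evi)]
    for a, b in zip(edges[:-1], edges[1:]):
        seg = evi[a:b]
        w = min(window, len(seg) if len(seg) % 2 else len(seg) - 1)  # odd, <= segment length
        out[a:b] = savgol_filter(seg, w, poly) if w > poly else seg
    return out

rng = np.random.default_rng(3)
evi = np.clip(np.sin(np.linspace(0, 3 * np.pi, 46)), 0, None) + 0.05 * rng.normal(size=46)
smoothed = piecewise_smooth(evi, breakpoints=[23])   # split at an assumed rotation date

X = rng.random((200, 6))         # per-parcel phenological metrics (peak EVI, timing, ...)
y = rng.integers(0, 3, 200)      # crop labels
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.feature_importances_)  # analogous to the paper's feature-significance analysis
```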

  20. Neurogenetics and auditory processing in developmental dyslexia.

    Science.gov (United States)

    Giraud, Anne-Lise; Ramus, Franck

    2013-02-01

    Dyslexia is a polygenic developmental reading disorder characterized by an auditory/phonological deficit. Based on the latest genetic and neurophysiological studies, we propose a tentative model in which phonological deficits could arise from genetic anomalies of the cortical micro-architecture in the temporal lobe. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. An individual differences approach to temporal integration and order reversals in the attentional blink task

    NARCIS (Netherlands)

    Willems, Charlotte; Saija, Jefta D.; Akyurek, Elkan G.; Martens, Sander

    2016-01-01

    Background The reduced ability to identify a second target when it is presented in close temporal succession of a first target is called the attentional blink (AB). Studies have shown large individual differences in AB task performance, where lower task performance has been associated with more

  2. When Spatial and Temporal Contiguities Help the Integration in Working Memory: "A Multimedia Learning" Approach

    Science.gov (United States)

    Mammarella, Nicola; Fairfield, Beth; Di Domenico, Alberto

    2013-01-01

    Two experiments examined the effects of spatial and temporal contiguities in a working memory binding task that required participants to remember coloured objects. In Experiment 1, a black and white drawing and a corresponding phrase that indicated its colour perceptually were either near or far (spatial study condition), while in Experiment 2,…

  3. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

    no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution of hearing, cognitive skills, and auditory lifestyle in 25 new hearing-aid users. The purpose was to assess the predictive power of the nonauditory measures while looking at the relationships between measures from various auditory-performance domains. The results showed that only moderate correlation exists between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle

  4. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled from only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  5. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched, the McGurk-MacDonald effect. Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure-auditory vs visual capture-can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. The fusion of mental imagery and sensation in the temporal association cortex.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2014-10-08

    It is well understood that the brain integrates information that is provided to our different senses to generate a coherent multisensory percept of the world around us (Stein and Stanford, 2008), but how does the brain handle concurrent sensory information from our mind and the external world? Recent behavioral experiments have found that mental imagery--the internal representation of sensory stimuli in one's mind--can also lead to integrated multisensory perception (Berger and Ehrsson, 2013); however, the neural mechanisms of this process have not yet been explored. Here, using functional magnetic resonance imaging and an adapted version of a well known multisensory illusion (i.e., the ventriloquist illusion; Howard and Templeton, 1966), we investigated the neural basis of mental imagery-induced multisensory perception in humans. We found that simultaneous visual mental imagery and auditory stimulation led to an illusory translocation of auditory stimuli and was associated with increased activity in the left superior temporal sulcus (L. STS), a key site for the integration of real audiovisual stimuli (Beauchamp et al., 2004a, 2010; Driver and Noesselt, 2008; Ghazanfar et al., 2008; Dahl et al., 2009). This imagery-induced ventriloquist illusion was also associated with increased effective connectivity between the L. STS and the auditory cortex. These findings suggest an important role of the temporal association cortex in integrating imagined visual stimuli with real auditory stimuli, and further suggest that connectivity between the STS and auditory cortex plays a modulatory role in spatially localizing auditory stimuli in the presence of imagined visual stimuli. Copyright © 2014 the authors 0270-6474/14/3313684-09$15.00/0.

  7. Single-molecule diffusion and conformational dynamics by spatial integration of temporal fluctuations

    KAUST Repository

    Serag, Maged F.

    2014-10-06

    Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules to map their diffusion behaviours. However, accurate analysis of diffusion behaviours and including other parameters, such as the conformation and size of molecules, remain as limitations to the method. Here, we report a method that addresses the limitations of existing single-molecular localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.
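    The cumulative-area idea can be illustrated with a simple sketch (our assumptions, not the authors' code): discretize the field of view into pixels and track how many distinct pixels a molecule's localizations have visited up to each frame; the growth rate of this area reflects the diffusion coefficient and the size or conformation of the molecule.

```python
# Cumulative occupied-area curve for a single-molecule trajectory.
import numpy as np

def cumulative_area(xy_per_frame, pixel_size=0.05):
    """xy_per_frame: iterable of (x, y) localizations, one per frame (microns).
    Returns the cumulative count of unique pixels visited after each frame."""
    visited, counts = set(), []
    for x, y in xy_per_frame:
        visited.add((int(x // pixel_size), int(y // pixel_size)))
        counts.append(len(visited))
    return np.array(counts)

# Toy 2-D random walk standing in for a freely diffusing molecule.
rng = np.random.default_rng(4)
steps = rng.normal(scale=0.1, size=(500, 2))   # step SD ~ sqrt(2*D*dt)
traj = np.cumsum(steps, axis=0)
area = cumulative_area(traj)
print(area[-1])  # total pixels visited; its growth rate tracks the diffusion coefficient
```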

  8. Single-molecule diffusion and conformational dynamics by spatial integration of temporal fluctuations

    KAUST Repository

    Serag, Maged F.; Abadi, Maram; Habuchi, Satoshi

    2014-01-01

    Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules to map their diffusion behaviours. However, accurate analysis of diffusion behaviours and including other parameters, such as the conformation and size of molecules, remain as limitations to the method. Here, we report a method that addresses the limitations of existing single-molecular localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.

  9. Land Cover Classification Using Integrated Spectral, Temporal, and Spatial Features Derived from Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    Yongguang Zhai

    2018-03-01

    Full Text Available Obtaining accurate and timely land cover information is an important topic in many remote sensing applications. Using satellite image time series data should achieve high-accuracy land cover classification. However, most satellite image time-series classification methods do not fully exploit the available data for mining the effective features to identify different land cover types. Therefore, a classification method that can take full advantage of the rich information provided by time-series data to improve the accuracy of land cover classification is needed. In this paper, a novel method for time-series land cover classification using spectral, temporal, and spatial information at an annual scale was introduced. Based on all the available data from time-series remote sensing images, a refined nonlinear dimensionality reduction method was used to extract the spectral and temporal features, and a modified graph segmentation method was used to extract the spatial features. The proposed classification method was applied in three study areas with land cover complexity, including Illinois, South Dakota, and Texas. All the Landsat time series data in 2014 were used, and different study areas have different amounts of invalid data. A series of comparative experiments were conducted on the annual time-series images using training data generated from Cropland Data Layer. The results demonstrated higher overall and per-class classification accuracies and kappa index values using the proposed spectral-temporal-spatial method compared to spectral-temporal classification methods. We also discuss the implications of this study and possibilities for future applications and developments of the method.

  10. Integration, Provenance, and Temporal Queries for Large-Scale Knowledge Bases

    OpenAIRE

    Gao, Shi

    2016-01-01

    Knowledge bases that summarize web information in RDF triples deliver many benefits, including support for natural language question answering and powerful structured queries that extract encyclopedic knowledge via SPARQL. Large scale knowledge bases grow rapidly in terms of scale and significance, and undergo frequent changes in both schema and content. Two critical problems have thus emerged: (i) how to support temporal queries that explore the history of knowledge bases or flash-back to th...

  11. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.
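    A rough sketch of how an onset-peak latency might be read off an averaged speech-evoked ABR waveform is given below; the search window, sampling rate, and synthetic waveform are assumptions for illustration, not the study's analysis.

```python
# Onset-peak latency extraction from an averaged ABR waveform.
import numpy as np
from scipy.signal import find_peaks

def onset_latency_ms(waveform, fs, search_window_ms=(5.0, 12.0)):
    """Return the latency (ms) of the largest peak inside a post-stimulus window."""
    lo, hi = (int(ms * fs / 1000) for ms in search_window_ms)
    segment = waveform[lo:hi]
    peaks, props = find_peaks(segment, height=0)
    if len(peaks) == 0:
        return None
    best = peaks[np.argmax(props["peak_heights"])]
    return (lo + best) * 1000.0 / fs

fs = 20000                                        # 20 kHz sampling of the averaged response
t = np.arange(int(0.04 * fs)) / fs                # 40-ms epoch matching the /da/ duration
fake_abr = np.exp(-((t - 0.007) / 0.0005) ** 2)   # synthetic peak near 7 ms
print(onset_latency_ms(fake_abr, fs))
```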

  12. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  13. Visual and auditory reaction time for air traffic controllers using quantitative electroencephalograph (QEEG) data.

    Science.gov (United States)

    Abbass, Hussein A; Tang, Jiangjun; Ellejmi, Mohamed; Kirby, Stephen

    2014-12-01

    The use of quantitative electroencephalograph in the analysis of air traffic controllers' performance can reveal with a high temporal resolution those mental responses associated with different task demands. To understand the relationship between visual and auditory correct responses, reaction time, and the corresponding brain areas and functions, air traffic controllers were given an integrated visual and auditory continuous reaction task. Strong correlations were found between correct responses to the visual target and the theta band in the frontal lobe, the total power in the medial of the parietal lobe and the theta-to-beta ratio in the left side of the occipital lobe. Incorrect visual responses triggered activations in additional bands including the alpha band in the medial of the frontal and parietal lobes, and the Sensorimotor Rhythm in the medial of the parietal lobe. Controllers' responses to visual cues were found to be more accurate but slower than their corresponding performance on auditory cues. These results suggest that controllers are more susceptible to overload when more visual cues are used in the air traffic control system, and more prone to errors when more auditory cues are used. Therefore, workload studies should be carried out to assess the usefulness of additional cues and their interactions with the air traffic control environment.

  14. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
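
    A minimal sketch of the classification-plus-signal-detection framing described above, assuming a hypothetical trial-by-voxel matrix X and binary labels y (1 = human voice, 0 = nonvoice); the classifier, cross-validation scheme and simulated data are placeholders, not the authors' actual pipeline.

```python
# Hypothetical sketch: cross-validated voice-vs-nonvoice classification of
# voxel patterns, scored with a signal-detection d' measure. Data are simulated.
import numpy as np
from scipy.stats import norm
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
y = rng.integers(0, 2, n_trials)              # 1 = human voice, 0 = nonvoice
X = rng.normal(size=(n_trials, n_voxels))     # noise voxels
X[y == 1, :20] += 0.5                         # weak "voice" signal in 20 voxels

clf = LinearSVC(C=1.0, max_iter=10000)
pred = cross_val_predict(clf, X, y, cv=10)    # 10-fold cross-validated predictions

hit_rate = np.mean(pred[y == 1] == 1)
fa_rate = np.mean(pred[y == 0] == 1)
eps = 1e-3                                    # keep z-scores finite
d_prime = norm.ppf(np.clip(hit_rate, eps, 1 - eps)) - norm.ppf(np.clip(fa_rate, eps, 1 - eps))
print(f"hit rate {hit_rate:.2f}, false-alarm rate {fa_rate:.2f}, d' = {d_prime:.2f}")
```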

  15. Integrated single grating compressor for variable pulse front tilt in simultaneously spatially and temporally focused systems.

    Science.gov (United States)

    Block, Erica; Thomas, Jens; Durfee, Charles; Squier, Jeff

    2014-12-15

    A Ti:Al2O3 multipass chirped pulse amplification system is outfitted with a single-grating, simultaneous spatial and temporal focusing (SSTF) compressor platform. For the first time, this novel design has the ability to easily vary the beam aspect ratio of an SSTF beam, and thus the degree of pulse-front tilt at focus, while maintaining a net zero-dispersion system. Accessible variation of pulse-front tilt gives full spatiotemporal control over the intensity distribution at the focus and could lead to better understanding of effects such as nonreciprocal writing and SSTF-material interactions.

  16. Impact of Auditory Integrative Training on Transforming Growth Factor-β1 and Its Effect on Behavioral and Social Emotions in Children with Autism Spectrum Disorder.

    Science.gov (United States)

    Al-Ayadhi, Laila; Alhowikan, Abdulrahman Mohammed; Halepoto, Dost Muhammad

    2018-01-01

    To explore the impact of auditory integrative training (AIT) on the inflammatory biomarker transforming growth factor (TGF)-β1 and to assess its effect on social behavior in children with autism spectrum disorder (ASD). In this cross-sectional study, 15 patients (14 males and 1 female) with ASD aged 3-12 years were recruited. All were screened for autism using the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). Plasma levels of TGF-β1 were measured in all patients using a sandwich enzyme-linked immunoassay (ELISA) immediately and 1 and 3 months after the AIT sessions. Pre- and post-AIT behavioral scores were also calculated for each child using the Childhood Autism Rating Scale (CARS), the Social Responsiveness Scale (SRS), and the Short Sensory Profile (SSP). Data were analyzed using the Statistical Package for the Social Sciences (SPSS 21.0 for Windows). Plasma levels of TGF-β1 significantly increased by 85% immediately after AIT (20.13 ± 12 ng/mL, p < 0.05), by 95% 1 month after AIT (21.2 ± 11 ng/mL, p < 0.01), and by 105% 3 months after AIT (22.25 ± 16 ng/mL, p < 0.01) compared to before AIT (10.85 ± 8 ng/mL). Results also revealed that behavioral rating scales (CARS, SRS, and SSP) improved in terms of disease severity after AIT. Increased plasma levels of TGF-β1 support the therapeutic effect of AIT on TGF-β1 followed by improvement in social awareness, social cognition, and social communication in children with ASD. Furthermore, TGF-β1 was associated with severity in all scores tested (CARS, SRS, and SSP); if confirmed in studies with larger sample sizes, TGF-β1 may be considered a marker of ASD severity and used to assess the efficacy of therapeutic interventions. © 2018 The Author(s) Published by S. Karger AG, Basel.
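
    The percentage increases quoted above can be reproduced directly from the group means reported in the abstract; the short check below uses only those published values.

```python
# Percent increase in mean plasma TGF-beta1 relative to the pre-AIT baseline,
# using the group means quoted in the abstract (ng/mL).
baseline = 10.85
followups = {"immediately after AIT": 20.13,
             "1 month after AIT": 21.2,
             "3 months after AIT": 22.25}
for label, mean in followups.items():
    print(f"{label}: {100 * (mean - baseline) / baseline:.0f}% increase")
# prints roughly 86%, 95% and 105%, in line with the ~85/95/105% figures reported
```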

  17. Foster home integration as a temporal indicator of relational well-being.

    Science.gov (United States)

    Waid, Jeffrey; Kothari, Brianne H; McBeath, Bowen M; Bank, Lew

    2017-12-01

    This study sought to identify factors that contribute to the relational well-being of youth in substitute care. Using data from the [BLIND] study, youth responded to a 9-item measure of positive home integration, a scale designed to assess youths' relational experiences with their caregivers and their integration into the foster home. Data were collected from youth at six-month intervals over an 18-month period. Latent growth curve modeling procedures were employed to determine if child, family, and case characteristics influenced youth's home integration trajectories. Results suggest stability in youth reports of home integration over time; however, children who were older at the time of study enrollment and youth who experienced placement changes during the period of observation showed decreased home integration over the 18-month period. Results suggest youth's perspectives of home integration may in part be a function of the child's developmental stage and their experiences with foster care placement instability. Implications for practice and future research are discussed.
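
    The study used latent growth curve modeling; as a rough, simplified analogue, the sketch below fits a linear mixed-effects growth model (random intercepts and slopes across four six-month waves) with statsmodels. All column names and the simulated data are hypothetical placeholders, not the study's variables.

```python
# Simplified stand-in for a latent growth curve analysis: a linear mixed-effects
# growth model with random intercepts and slopes across assessment waves.
# Column names (youth_id, wave, home_integration, age_at_entry, placement_change)
# and the simulated data are hypothetical, not the study's actual variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_youth, n_waves = 100, 4                       # waves 0..3 = 0, 6, 12, 18 months
df = pd.DataFrame({
    "youth_id": np.repeat(np.arange(n_youth), n_waves),
    "wave": np.tile(np.arange(n_waves), n_youth),
})
df["age_at_entry"] = np.repeat(rng.integers(8, 17, n_youth), n_waves)
df["placement_change"] = np.repeat(rng.integers(0, 2, n_youth), n_waves)
df["home_integration"] = (30 - 0.3 * df["age_at_entry"]
                          - 1.0 * df["placement_change"] * df["wave"]
                          + rng.normal(0, 2, len(df)))

model = smf.mixedlm("home_integration ~ wave * placement_change + age_at_entry",
                    data=df, groups=df["youth_id"], re_formula="~wave")
result = model.fit()
print(result.summary())
```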

  18. The use of listening devices to ameliorate auditory deficit in children with autism.

    Science.gov (United States)

    Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna

    2014-02-01

    To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency modulation (radio transmission) (FM) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently underwent a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB], spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in improved discrimination of speech in noise. These findings suggest that FM listening devices can enhance speech perception in noise, aid social interaction, and improve educational outcomes in children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.

  19. Data Integration for Spatio-Temporal Patterns of Gene Expression of Zebrafish development: the GEMS database

    Directory of Open Access Journals (Sweden)

    Belmamoune Mounia

    2008-06-01

    Full Text Available The Gene Expression Management System (GEMS) is a database system for patterns of gene expression. These patterns result from systematic whole-mount fluorescent in situ hybridization studies on zebrafish embryos. GEMS is an integrative platform that addresses one of the important challenges of developmental biology: how to integrate genetic data that underpin morphological changes during embryogenesis. Our motivation to build this system was driven by the need to organize and compare multiple patterns of gene expression at the tissue level. Integration with other developmental and biomolecular databases will further support our understanding of development. GEMS operates in concert with a database containing a digital atlas of the zebrafish embryo; this digital atlas of zebrafish development was conceived prior to the expansion of GEMS. The atlas contains 3D volume models of canonical stages of zebrafish development, in which each volume element is annotated with an anatomical term. These terms are extracted from a formal anatomical ontology, i.e. the Developmental Anatomy Ontology of Zebrafish (DAOZ). In GEMS, anatomical terms from this ontology, together with terms from the Gene Ontology (GO), are used to annotate patterns of gene expression, thereby providing mechanisms for integration and retrieval. The annotations are the glue for integration of patterns of gene expression in GEMS as well as in other biomolecular databases. On the one hand, zebrafish anatomy terminology allows gene expression data within GEMS to be integrated with phenotypical data in the 3D atlas of zebrafish development. On the other hand, GO terms extend the integration of GEMS expression patterns to a wide range of bioinformatics resources.

  20. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography.

    Science.gov (United States)

    Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S

    2017-06-01

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
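
    The variance-reduction prediction of the Bayesian (maximum-likelihood) cue-combination model mentioned above can be illustrated with a few lines of arithmetic; the standard deviations below are arbitrary illustrative numbers, not values from the study.

```python
# Minimal illustration of the standard maximum-likelihood cue-combination rule:
# combining independent auditory and visual estimates reduces variance.
# sigma values are arbitrary illustrative numbers, not data from the study.
sigma_a = 2.0      # standard deviation of the auditory estimate
sigma_v = 3.0      # standard deviation of the visual estimate

w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)   # reliability weight for audition
w_v = 1 - w_a
var_combined = (sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2)

print(f"weights: auditory {w_a:.2f}, visual {w_v:.2f}")
print(f"variance: auditory {sigma_a**2:.1f}, visual {sigma_v**2:.1f}, "
      f"combined {var_combined:.2f}")   # combined variance is below either alone
```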

  1. Genetics Home Reference: autosomal dominant partial epilepsy with auditory features

    Science.gov (United States)

    ... Other Names for This Condition: ADLTE; ADPEAF; autosomal dominant lateral temporal lobe epilepsy; epilepsy, partial, with auditory features; ETL1. Related Information: ... Nakken KO, Fischer C, Steinlein OK. Familial temporal lobe epilepsy with aphasic seizures and linkage to chromosome 10q22- ...

  2. Predicting spatial and temporal gene expression using an integrative model of transcription factor occupancy and chromatin state.

    Directory of Open Access Journals (Sweden)

    Bartek Wilczynski

    Full Text Available Precise patterns of spatial and temporal gene expression are central to metazoan complexity and act as a driving force for embryonic development. While there has been substantial progress in dissecting and predicting cis-regulatory activity, our understanding of how information from multiple enhancer elements converge to regulate a gene's expression remains elusive. This is in large part due to the number of different biological processes involved in mediating regulation as well as limited availability of experimental measurements for many of them. Here, we used a Bayesian approach to model diverse experimental regulatory data, leading to accurate predictions of both spatial and temporal aspects of gene expression. We integrated whole-embryo information on transcription factor recruitment to multiple cis-regulatory modules, insulator binding and histone modification status in the vicinity of individual gene loci, at a genome-wide scale during Drosophila development. The model uses Bayesian networks to represent the relation between transcription factor occupancy and enhancer activity in specific tissues and stages. All parameters are optimized in an Expectation Maximization procedure providing a model capable of predicting tissue- and stage-specific activity of new, previously unassayed genes. Performing the optimization with subsets of input data demonstrated that neither enhancer occupancy nor chromatin state alone can explain all gene expression patterns, but taken together allow for accurate predictions of spatio-temporal activity. Model predictions were validated using the expression patterns of more than 600 genes recently made available by the BDGP consortium, demonstrating an average 15-fold enrichment of genes expressed in the predicted tissue over a naïve model. We further validated the model by experimentally testing the expression of 20 predicted target genes of unknown expression, resulting in an accuracy of 95% for temporal
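
    As a much-simplified stand-in for the Bayesian-network/EM model described above, the sketch below predicts tissue-specific expression (on/off) from binary regulatory features with a naive Bayes classifier; the feature set, the simulated rule and the data are invented for illustration only.

```python
# Much-simplified sketch of predicting tissue-specific expression (on/off) from
# binary regulatory features (TF occupancy, insulator binding, histone marks).
# The original work used Bayesian networks fit with EM; a naive Bayes classifier
# is used here only to illustrate the idea. All data are simulated.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_genes, n_features = 600, 12          # e.g. occupancy of 12 factors/marks near each gene
X = rng.integers(0, 2, size=(n_genes, n_features))
# simulated rule: expression requires factor 0 bound, mark 3 present, no insulator 7
y = ((X[:, 0] == 1) & (X[:, 3] == 1) & (X[:, 7] == 0)).astype(int)

clf = BernoulliNB()
acc = cross_val_score(clf, X, y, cv=5).mean()
baseline = max(y.mean(), 1 - y.mean())          # "always predict the majority class"
print(f"cross-validated accuracy {acc:.2f} vs naive baseline {baseline:.2f}")
```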

  3. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  5. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  6. Audio-Tactile Integration and the Influence of Musical Training

    OpenAIRE

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C.; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at function...

  7. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Full Text Available Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.

  8. Cerebrospinal otorrhoea--a temporal bone report.

    Science.gov (United States)

    Walby, A P

    1988-05-01

    Spontaneous cerebrospinal otorrhoea is a rare complication of a cholesteatoma. The histological findings in a temporal bone from such a case are reported. The cholesteatoma had eroded deeply through the vestibule into the internal auditory meatus.

  9. Asymmetric Temporal Integration of Layer 4 and Layer 2/3 Inputs in Visual Cortex

    OpenAIRE

    Hang, Giao B.; Dan, Yang

    2010-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices...

  10. Integrating spatial, temporal, and size probabilities for the annual landslide hazard maps in the Shihmen watershed, Taiwan

    Directory of Open Access Journals (Sweden)

    C. Y. Wu

    2013-09-01

    Full Text Available Landslide spatial, temporal, and size probabilities were used to perform a landslide hazard assessment in this study. Eleven intrinsic geomorphological factors and two extrinsic rainfall factors were evaluated as landslide-susceptibility-related factors using success rate curves, landslide ratio plots, frequency distributions of landslide and non-landslide groups, and probability–probability plots. Data on landslides caused by Typhoon Aere in the Shihmen watershed were selected to train the susceptibility model. The landslide area probability, based on the power-law relationship between landslide area and its noncumulative number, was analyzed using the Pearson type 5 probability density function. The exceedance probabilities of rainfall with various recurrence intervals, including 2, 5, 10, 20, 50, 100 and 200 yr, were used to determine the temporal probabilities of the events. The study was conducted in the Shihmen watershed, which has an area of 760 km2 and is one of the main water sources for northern Taiwan. The validation result of Typhoon Krosa demonstrated that this landslide hazard model could be used to predict landslide probabilities. The results suggested that integration of spatial, area, and exceedance probabilities to estimate the annual probability of each slope unit is feasible. The advantage of this annual landslide probability model lies in its ability to estimate the annual landslide risk, instead of a scenario-based risk.
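
    A minimal sketch of the probability combination described above: the annual landslide probability of a slope unit is taken as the product of spatial, temporal and size probabilities, with the temporal term approximated by the annual exceedance probability 1/T of rainfall with recurrence interval T. The spatial and size values are placeholders; the study derives them from a susceptibility model and a Pearson type 5 landslide-area distribution.

```python
# Sketch: combine spatial, temporal and size probabilities into an annual
# landslide probability for one slope unit, assuming independence.
# p_spatial and p_size are placeholder values, not results from the study.
recurrence_intervals = [2, 5, 10, 20, 50, 100, 200]   # years
p_spatial = 0.30      # probability the slope unit fails given the triggering rainfall
p_size = 0.15         # probability the landslide exceeds the area of interest

for T in recurrence_intervals:
    p_temporal = 1.0 / T                  # annual exceedance probability of the rainfall
    p_annual = p_spatial * p_temporal * p_size
    print(f"T = {T:>3d} yr rainfall: annual landslide probability = {p_annual:.4f}")
```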

  11. Spatial and temporal movements in Pyrenean bearded vultures (Gypaetus barbatus): Integrating movement ecology into conservation practice.

    Science.gov (United States)

    Margalida, Antoni; Pérez-García, Juan Manuel; Afonso, Ivan; Moreno-Opo, Rubén

    2016-10-25

    Understanding the movement of threatened species is important if we are to optimize management and conservation actions. Here, we describe the age- and sex-specific spatial and temporal ranging patterns of 19 bearded vultures Gypaetus barbatus tracked with GPS technology. Our findings suggest that spatial asymmetries are a consequence of breeding status and age-classes. Territorial individuals exploited home ranges of about 50 km2, while non-territorial birds used areas of around 10,000 km2 (with no seasonal differences). Mean daily movements differed between territorial (23.8 km) and non-territorial birds (46.1 km), and differences were also found between sexes in non-territorial birds. Maximum distances travelled per day also differed between territorial (8.2 km) and non-territorial individuals (26.5 km). Territorial females moved greater distances (12 km) than males (6.6 km). Taking into account high-use core areas (K20), Supplementary Feeding Sites (SFS) do not seem to play an important role in the use of space by bearded vultures. For non-territorial and territorial individuals, 54% and 46% of their home ranges (K90), respectively, were outside protected areas. Our findings will help develop guidelines for establishing priority areas based on spatial use, and also optimize management and conservation actions for this threatened species.

  12. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA...]. The filterbank was designed to approximate auditory filter-shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward masking data. The results of the present study demonstrate that a "purely" spectrum-based model approach can successfully describe auditory coloration detection even at high...
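
    The spectral modulation produced by a single early reflection is a comb filter; the short sketch below computes it from the delay and relative gain of the reflection (illustrative values, not the study's stimuli or its auditory model).

```python
# Illustration of the comb-filter spectrum produced by adding a single delayed
# reflection to a direct sound: |H(f)|^2 = |1 + g * exp(-i*2*pi*f*tau)|^2.
# Delay and gain are arbitrary illustrative values.
import numpy as np

tau = 0.005                 # reflection delay in seconds (5 ms)
g = 0.8                     # reflection gain relative to the direct sound
f = np.linspace(0, 2000, 2001)                 # frequency axis in Hz

H = 1 + g * np.exp(-2j * np.pi * f * tau)      # transfer function of direct + reflection
power_db = 10 * np.log10(np.abs(H) ** 2)

print(f"peak-to-notch depth: {power_db.max() - power_db.min():.1f} dB")
print("spectral notches at odd multiples of 1/(2*tau) =", 1 / (2 * tau), "Hz")
```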

  13. Distraction by deviance: comparing the effects of auditory and visual deviant stimuli on auditory and visual target processing.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-01-01

    We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.

  14. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  15. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
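
    A toy sketch of the penalty-scoring idea described above: candidate schedules for overlapping auditory messages are enumerated, each is scored by its total pairwise temporal overlap, and the lowest-penalty schedule that respects each message's maximum acceptable latency is selected. Message names, durations, latencies and the penalty rule are invented for illustration and do not reproduce the dissertation's heuristics.

```python
# Toy sketch of penalty-based scheduling of auditory messages: each message may
# start anywhere up to its maximum acceptable latency; candidate schedules are
# scored by pairwise temporal overlap, and the lowest-penalty schedule wins.
from itertools import product

# (name, duration in s, maximum acceptable start latency in s) -- illustrative
messages = [("alert", 1.0, 0.5), ("status", 2.0, 3.0), ("mail", 1.5, 4.0)]
candidate_starts = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]

def overlap(s1, d1, s2, d2):
    """Seconds of temporal overlap between two scheduled messages."""
    return max(0.0, min(s1 + d1, s2 + d2) - max(s1, s2))

best_schedule, best_penalty = None, float("inf")
for starts in product(candidate_starts, repeat=len(messages)):
    # discard schedules that violate a message's maximum acceptable latency
    if any(s > lat for s, (_, _, lat) in zip(starts, messages)):
        continue
    penalty = sum(overlap(starts[i], messages[i][1], starts[j], messages[j][1])
                  for i in range(len(messages)) for j in range(i + 1, len(messages)))
    if penalty < best_penalty:
        best_schedule, best_penalty = starts, penalty

print("best start times:", dict(zip((m[0] for m in messages), best_schedule)))
print("total overlap penalty (s):", best_penalty)
```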

  16. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  17. Coding space-time stimulus dynamics in auditory brain maps

    Directory of Open Access Journals (Sweden)

    Yunyan eWang

    2014-04-01

    Full Text Available Sensory maps are often distorted representations of the environment, where ethologically-important ranges are magnified. The implication of a biased representation extends beyond increased acuity for having more neurons dedicated to a certain range. Because neurons are functionally interconnected, non-uniform representations influence the processing of high-order features that rely on comparison across areas of the map. Among these features are time-dependent changes of the auditory scene generated by moving objects. How sensory representation affects high order processing can be approached in the map of auditory space of the owl’s midbrain, where locations in the front are over-represented. In this map, neurons are selective not only to location but also to location over time. The tuning to space over time leads to direction selectivity, which is also topographically organized. Across the population, neurons tuned to peripheral space are more selective to sounds moving into the front. The distribution of direction selectivity can be explained by spatial and temporal integration on the non-uniform map of space. Thus, the representation of space can induce biased computation of a second-order stimulus feature. This phenomenon is likely observed in other sensory maps and may be relevant for behavior.

  18. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    Full Text Available This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  19. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.
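
    A small sketch of the stimulus construction described above: a complex of high harmonics is generated with components added in cosine, alternating, or random phase, so that the amplitude spectrum is identical while the temporal fine structure differs. The fundamental frequency, harmonic range and sampling rate are illustrative values, not the study's exact parameters.

```python
# Generate a harmonic tone complex with components added in cosine phase,
# alternating (cosine/sine) phase, or random phase. Same amplitude spectrum,
# different waveform shape. Parameter values are illustrative only.
import numpy as np

fs = 48000                      # sampling rate (Hz)
f0 = 100                        # fundamental frequency (Hz)
harmonics = np.arange(10, 30)   # high, largely unresolved harmonics
t = np.arange(int(0.05 * fs)) / fs   # 50-ms tone

def complex_tone(phase_mode):
    rng = np.random.default_rng(3)
    if phase_mode == "cosine":
        phases = np.zeros(len(harmonics))
    elif phase_mode == "alternating":
        phases = np.where(np.arange(len(harmonics)) % 2 == 0, 0.0, np.pi / 2)
    else:  # "random"
        phases = rng.uniform(0, 2 * np.pi, len(harmonics))
    return sum(np.cos(2 * np.pi * n * f0 * t + p) for n, p in zip(harmonics, phases))

for mode in ("cosine", "alternating", "random"):
    tone = complex_tone(mode)
    crest = np.max(np.abs(tone)) / np.sqrt(np.mean(tone ** 2))
    print(f"{mode:>11s} phase: crest factor {crest:.1f}")  # peakier waveform for cosine phase
```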

  20. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  1. Integrating environmental equity, energy and sustainability: A spatial-temporal study of electric power generation

    Science.gov (United States)

    Touche, George Earl

    The theoretical scope of this dissertation encompasses the ecological factors of equity and energy. Literature important to environmental justice and sustainability are reviewed, and a general integration of global concepts is delineated. The conceptual framework includes ecological integrity, quality human development, intra- and inter-generational equity and risk originating from human economic activity and modern energy production. The empirical focus of this study concentrates on environmental equity and electric power generation within the United States. Several designs are employed while using paired t-tests, independent t-tests, zero-order correlation coefficients and regression coefficients to test seven sets of hypotheses. Examinations are conducted at the census tract level within Texas and at the state level across the United States. At the community level within Texas, communities that host coal or natural gas utility power plants and corresponding comparison communities that do not host such power plants are tested for compositional differences. Comparisons are made both before and after the power plants began operating for purposes of assessing outcomes of the siting process and impacts of the power plants. Relationships between the compositions of the hosting communities and the risks and benefits originating from the observed power plants are also examined. At the statewide level across the United States, relationships between statewide composition variables and risks and benefits originating from statewide electric power generation are examined. Findings indicate the existence of some limited environmental inequities, but they do not indicate disparities that confirm the general thesis of environmental racism put forth by environmental justice advocates. Although environmental justice strategies that would utilize Title VI of the 1964 Civil Rights Act and the disparate impact standard do not appear to be applicable, some findings suggest potential

  2. Assessing temporal uncertainties in integrated groundwater management: an opportunity for change?

    Science.gov (United States)

    Anglade, J. A.; Billen, G.; Garnier, J.

    2013-12-01

    Since the early 1990s, high nitrate concentrations (occasionally exceeding the European drinking-water standard of 50 mg NO3-/l) have been recorded in the borewells supplying the water requirements of Auxerre's 60,000 inhabitants. The water catchment area (86 km2) is located in a rural area dedicated to field crop production in intensive cereal farming systems based on massive inputs of synthetic fertilizers. In 1998, a co-management committee comprising the city of Auxerre, rural municipalities located in the water catchment area, consumers and farmers was created as a forward-looking associative structure to achieve integrated, adaptive and sustainable management of the resource. In 2002, 18 years after the first signs of water quality degradation, multiparty negotiation led to a cooperative agreement: a contribution, funded as a surcharge on consumers' water bills, to assist farmers toward new practices (optimized application of fertilizers, catch crops, and buffer strips). The management strategy, initially integrated and operating on a voluntary basis, did not rapidly deliver on its promises (there was no significant decrease in nitrate concentration). It evolved into a combination of short-term palliative solutions and contractual and regulatory instruments with higher requirements. The establishment of a regulatory framework caused major tensions between stakeholders, bringing about discouragement and a lack of understanding as to the absence of results on water quality after 20 years of joint action. At this point, urban-rural solidarity was in danger of being undermined, so the time issue, i.e. the delay between changes in agricultural pressure and visible effects on water quality, was scientifically addressed and communicated to all the parties involved. First, water-age dating through CFC and SF6 (anthropogenic gases), coupled with a statistical long-term analysis of agricultural evolution, revealed a residence time in the Sequanian limestones

  3. Trends and Opportunities of BIM-GIS Integration in the Architecture, Engineering and Construction Industry: A Review from a Spatio-Temporal Statistical Perspective

    Directory of Open Access Journals (Sweden)

    Yongze Song

    2017-12-01

    Full Text Available The integration of building information modelling (BIM) and geographic information system (GIS) in construction management is a new and fast-developing trend in recent years, from research to industrial practice. BIM has the advantage of rich geometric and semantic information through the building life cycle, while GIS is a broad field covering geovisualization-based decision making and geospatial modelling. However, most current studies of BIM-GIS integration focus on the integration techniques but lack theories and methods for further data analysis and mathematical modelling. This paper reviews the applications and discusses future trends of BIM-GIS integration in the architecture, engineering and construction (AEC) industry based on the studies of 96 high-quality research articles from a spatio-temporal statistical perspective. The analysis of these applications helps reveal the evolution of BIM-GIS integration. Results show that the utilization of BIM-GIS integration in the AEC industry requires systematic theories beyond integration technologies and deep applications of mathematical modeling methods, including spatio-temporal statistical modeling in GIS and 4D/nD BIM simulation and management. Opportunities of BIM-GIS integration are outlined as three hypotheses in the AEC industry for future research on the in-depth integration of BIM and GIS. BIM-GIS integration hypotheses enable more comprehensive applications through the life cycle of AEC projects.

  4. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  5. Integration of multi-temporal airborne and terrestrial laser scanning data for the analysis and modelling of proglacial geomorphodynamic processes

    Science.gov (United States)

    Briese, Christian; Glira, Philipp; Pfeifer, Norbert

    2013-04-01

    Ongoing and predicted climate change leads, in sensitive areas such as high-mountain proglacial regions, to significant geomorphodynamic processes (e.g. landslides). Within a short time period (even less than a year) these processes substantially change the landscape. In order to study and analyse recent changes in a proglacial environment, the multi-disciplinary research project PROSA (high-resolution measurements of morphodynamics in rapidly changing PROglacial Systems of the Alps) selected the study area of the Gepatschferner (Tyrol), the second largest glacier in Austria. One of the challenges within the project is the geometric integration (i.e. georeferencing) of multi-temporal topographic data sets in a continuously changing environment. Furthermore, one has to deal with data sets of multiple scales (large-area data sets vs. highly detailed local observations) that are, on the one hand, necessary to cover the complete proglacial area with the whole catchment and, on the other hand, guarantee dense and accurate sampling of individual areas of interest (e.g. a particular highly affected slope). This contribution suggests a comprehensive method for the georeferencing of multi-temporal airborne and terrestrial laser scanning (ALS and TLS, respectively) data. It is studied by application to the data acquired within the project PROSA. In a first step, a stable coordinate frame that allows the analysis of the changing environment has to be defined. Subsequently, procedures for the transformation of the individual ALS and TLS data sets into this coordinate frame were developed. This includes the selection of appropriate reference areas as well as the development of special targets for the local TLS acquisition that can be used for absolute georeferencing in the common coordinate frame. Because different TLS instruments can be used (some longer-range sensors that allow covering larger areas vs. closer operating sensors that allow a

  6. Temporal dynamics of sensorimotor integration in speech perception and production: Independent component analysis of EEG data

    Directory of Open Access Journals (Sweden)

    David eJenson

    2014-07-01

    Full Text Available Activity in premotor and sensorimotor cortices is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG µ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to 1) identify clusters of µ components common to all conditions and 2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. 17 and 15 out of 20 participants produced left and right µ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < .05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that µ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. µ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while µ-alpha ERD may index re-afferent sensory feedback during speech rehearsal and production.
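
    A heavily reduced sketch of the analysis idea, assuming simulated data: multichannel EEG is unmixed with ICA and the alpha (~10 Hz) and beta (~20 Hz) power of each component is inspected with a Welch spectrum. The study's actual pipeline (component clustering across participants, source localization, time-resolved ERSPs) is not reproduced here.

```python
# Reduced sketch of the analysis idea: unmix multichannel EEG with ICA, then
# inspect alpha (~10 Hz) and beta (~20 Hz) power of each component. Simulated data.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

fs = 250                                     # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                 # 10 s of data
rng = np.random.default_rng(4)

# two simulated sources: a mu-like 10 Hz rhythm and a 20 Hz beta rhythm
sources = np.vstack([np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t)])
mixing = rng.normal(size=(8, 2))             # project onto 8 "electrodes"
eeg = mixing @ sources + 0.5 * rng.normal(size=(8, len(t)))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(eeg.T).T      # (n_components, n_samples)

for i, comp in enumerate(components):
    freqs, psd = welch(comp, fs=fs, nperseg=fs * 2)
    alpha = psd[(freqs >= 8) & (freqs <= 13)].mean()
    beta = psd[(freqs >= 15) & (freqs <= 25)].mean()
    print(f"component {i}: alpha power {alpha:.3g}, beta power {beta:.3g}")
```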

  7. The Temporal Dynamics of Feature Integration for Color, form, and Motion

    Directory of Open Access Journals (Sweden)

    KS Pilz

    2012-07-01

    Full Text Available When two similar visual stimuli are presented in rapid succession, only their fused image is perceived, without having conscious access to the single stimuli. Such feature fusion occurs both for color (e.g., Efron 1973) and form (e.g., Scharnowski et al 2007). For verniers, the fusion process lasts for more than 400 ms, as has been shown using TMS (Scharnowski et al 2009). In three experiments, we used light masks to investigate the time course of feature fusion for color, form, and motion. In experiment one, two verniers were presented in rapid succession with opposite offset directions. Subjects had to indicate the offset direction of the vernier. In a second experiment, a red and a green disk were presented in rapid succession, and subjects had to indicate whether the fused disk appeared yellow rather than red or green. In a third experiment, three frames of random dots were presented successively. The first two frames created a percept of apparent motion to the upper right, and the last two frames one to the upper left, or vice versa. Subjects had to indicate the direction of motion. All stimuli were presented foveally. In all three experiments, we first balanced performance so that neither the first nor the second stimulus dominated the fused percept. In a second step, a light mask was presented either before, during, or after stimulus presentation. Depending on presentation time, the light masks modulated the fusion process so that either the first or the second stimulus dominated the percept. Our results show that unconscious feature fusion lasts more than five times longer than the actual stimulus duration, which indicates that individual features are stored for a substantial amount of time before they are integrated.

  8. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    Science.gov (United States)

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially those of the transverse gyri, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely cause auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but produced fluent and comprehensible speech. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  9. Anteverted internal auditory canal as an inner ear anomaly in patients with craniofacial microsomia.

    Science.gov (United States)

    L'Heureux-Lebeau, Bénédicte; Saliba, Issam

    2014-09-01

    Craniofacial microsomia involves structure of the first and second branchial arches. A wide range of ear anomalies, affecting external, middle and inner ear, has been described in association with this condition. We report three cases of anteverted internal auditory canal in patients presenting craniofacial microsomia. This unique internal auditory canal orientation was found on high-resolution computed tomography of the temporal bones. This internal auditory canal anomaly is yet unreported in craniofacial anomalies. Copyright © 2014. Published by Elsevier Ireland Ltd.

  10. Validation of a station-prototype designed to integrate temporally soil N2O fluxes: IPNOA Station prototype.

    Science.gov (United States)

    Laville, Patricia; Volpi, Iride; Bosco, Simona; Virgili, Giorgio; Neri, Simone; Continanza, Davide; Bonari, Enrico

    2016-04-01

    Nitrous oxide (N2O) flux measurement from agricultural soil surfaces remains a major challenge for the scientific community. Evaluating integrated soil N2O fluxes is difficult because these emissions are lower than those of the other greenhouse gas sources (CO2, CH4). They are also sporadic, because they depend strongly on a few environmental conditions acting as limiting factors. Within a LIFE project (IPNOA: LIFE11 ENV/IT/00032) a station prototype was developed to integrate N2O and CO2 emissions annually using an automatic chamber technique. The main challenge was to develop a device durable enough to measure CO2 and N2O fluxes continuously, with sufficient sensitivity to allow reliable assessments of soil GHG emissions with minimal technical field intervention. The IPNOA station prototype was developed by West System SRL and was set up over two years (2014-2015) in an experimental maize field in Tuscany. The prototype involved six automatic chambers; the complete measurement cycle was 2 hours. Each chamber closed for 20 min, and gas accumulation was monitored in line with IR spectrometers. Auxiliary measurements, including soil temperature, soil water content and weather data, were also monitored. All data were managed remotely with the same acquisition software installed in the prototype control unit. The operation of the prototype during the two cropping years allowed testing of its major features: its ability to track the temporal variation of soil N2O fluxes over a long period, across weather conditions and agricultural management, and to demonstrate the value of continuous flux measurements. The temporal distribution of N2O fluxes indicated that emissions can be very large and discontinuous over short periods of less than ten days, and that about 70% of the time N2O fluxes were around the detection limit of the instrumentation, evaluated at 2 ng N ha-1 day-1. N2O emission factor assessments were 1.9% in 2014
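
    A simplified sketch of how a closed-chamber N2O flux is commonly derived: fit the slope of the concentration rise during chamber closure and convert it to an areal flux with the ideal gas law. The chamber geometry, temperature and concentration series below are illustrative assumptions, not the prototype's specifications or firmware.

```python
# Simplified chamber-flux calculation: fit the N2O concentration slope during
# the 20-min chamber closure and convert it to an areal flux. Values illustrative.
import numpy as np

# synthetic 20-min accumulation, one reading per minute (ppb N2O)
t_min = np.arange(0, 21)
conc_ppb = 330 + 0.8 * t_min + np.random.default_rng(5).normal(0, 0.5, len(t_min))

slope_ppb_per_s = np.polyfit(t_min * 60.0, conc_ppb, 1)[0]   # linear fit of the rise

# chamber and conversion constants (illustrative)
V = 0.03            # chamber volume (m^3)
A = 0.15            # soil area covered (m^2)
T = 293.15          # air temperature (K)
P = 101325.0        # pressure (Pa)
R = 8.314           # gas constant (J mol^-1 K^-1)
M_N = 28.0          # g of N per mol of N2O (2 x 14)

mol_air_per_m3 = P / (R * T)                      # ideal gas law
flux_g_N_m2_s = slope_ppb_per_s * 1e-9 * mol_air_per_m3 * M_N * V / A
flux_g_N_ha_day = flux_g_N_m2_s * 1e4 * 86400     # convert to g N per hectare per day
print(f"N2O flux ~ {flux_g_N_ha_day:.1f} g N ha^-1 day^-1")
```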

  11. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  12. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    Science.gov (United States)

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756
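    For readers unfamiliar with how a point of subjective simultaneity is derived from temporal order judgments, the following minimal sketch fits a cumulative Gaussian to hypothetical "auditory first" proportions across SOAs; the data, the parameterization and the 75% criterion for the JND are illustrative assumptions, not the analysis used in the study.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical TOJ data: proportion of "auditory first" responses at each
    # stimulus onset asynchrony (negative SOA = auditory lead, in ms).
    soa = np.array([-240, -160, -80, 0, 80, 160, 240], dtype=float)
    p_aud_first = np.array([0.95, 0.88, 0.70, 0.48, 0.25, 0.12, 0.05])

    def psychometric(x, pss, sigma):
        # Cumulative Gaussian: probability of reporting "auditory first" as a
        # function of SOA; the PSS is the 50% point, sigma sets the slope.
        return 1.0 - norm.cdf(x, loc=pss, scale=sigma)

    (pss, sigma), _ = curve_fit(psychometric, soa, p_aud_first, p0=(0.0, 100.0))
    jnd = sigma * norm.ppf(0.75)    # SOA shift from the 50% to the 75% point
    print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")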

  13. Audio-tactile integration and the influence of musical training.

    Directory of Open Access Journals (Sweden)

    Anja Kuchenbuch

    Full Text Available Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  14. Audio-tactile integration and the influence of musical training.

    Science.gov (United States)

    Kuchenbuch, Anja; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Pantev, Christo

    2014-01-01

    Perception of our environment is a multisensory experience; information from different sensory systems like the auditory, visual and tactile is constantly integrated. Complex tasks that require high temporal and spatial precision of multisensory integration put strong demands on the underlying networks but it is largely unknown how task experience shapes multisensory processing. Long-term musical training is an excellent model for brain plasticity because it shapes the human brain at functional and structural levels, affecting a network of brain areas. In the present study we used magnetoencephalography (MEG) to investigate how audio-tactile perception is integrated in the human brain and if musicians show enhancement of the corresponding activation compared to non-musicians. Using a paradigm that allowed the investigation of combined and separate auditory and tactile processing, we found a multisensory incongruency response, generated in frontal, cingulate and cerebellar regions, an auditory mismatch response generated mainly in the auditory cortex and a tactile mismatch response generated in frontal and cerebellar regions. The influence of musical training was seen in the audio-tactile as well as in the auditory condition, indicating enhanced higher-order processing in musicians, while the sources of the tactile MMN were not influenced by long-term musical training. Consistent with the predictive coding model, more basic, bottom-up sensory processing was relatively stable and less affected by expertise, whereas areas for top-down models of multisensory expectancies were modulated by training.

  15. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    Science.gov (United States)

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  16. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included measuring electrophysiological and behavioral auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance in some of the behavioral auditory processing tests and higher hearing aid benefit in noisy situations (p-value < 0.05). No changes were noted for the control group (p-value < 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency, improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, figure-to-ground for verbal sounds and greater benefits in reverberant and noisy environments.

  17. Binding and unbinding the auditory and visual streams in the McGurk effect.

    Science.gov (United States)

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  18. Brain responses and looking behaviour during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life.

    Directory of Open Access Journals (Sweden)

    Elena V Kushnerenko

    2013-07-01

    Full Text Available The use of visual cues during the processing of audiovisual speech is known to be less efficient in children and adults with language difficulties and difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6-9 months to 14-16 months of age. We used eye-tracking to examine whether individual differences in visual attention during audiovisual processing of speech in 6 to 9 month old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6-9 month old infants also participated in an event-related potential (ERP) audiovisual task within the same experimental session. Language development was then followed-up at the age of 14-16 months, using two measures of language development, the Preschool Language Scale (PLS) and the Oxford Communicative Development Inventory (CDI). The results show that those infants who were less efficient in auditory speech processing at the age of 6-9 months had lower receptive language scores at 14-16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audio-visually incongruent stimuli at 6-9 months were both significantly associated with language development at 14-16 months. These findings add to the understanding of individual differences in neural signatures of audiovisual processing and associated looking behaviour in infants.

  19. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  20. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  1. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Full Text Available Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question by examining two auditory perceptual decision-making situations. Using graph-theoretical network discovery, we traced the large-scale network
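
    As a rough illustration of the kind of graph-theoretical summary described above (segregation and integration of coupled oscillations), the sketch below thresholds a power-envelope correlation matrix into an unweighted graph and computes a mean clustering coefficient and global efficiency. The toy data, the fixed edge density and the binarization are simplifying assumptions, not the authors' pipeline.

    import numpy as np

    def binarize_top_edges(corr, density=0.2):
        """Keep the strongest `density` fraction of correlations as unweighted edges."""
        c = corr.copy().astype(float)
        np.fill_diagonal(c, -np.inf)
        thresh = np.quantile(c[np.isfinite(c)], 1.0 - density)
        return (c >= thresh).astype(int)

    def clustering_coefficient(adj):
        """Mean local clustering coefficient, a simple index of network segregation."""
        coefs = []
        for i in range(adj.shape[0]):
            nbrs = np.flatnonzero(adj[i])
            k = nbrs.size
            if k < 2:
                coefs.append(0.0)
                continue
            links = adj[np.ix_(nbrs, nbrs)].sum() / 2       # edges among the neighbours
            coefs.append(links / (k * (k - 1) / 2))
        return float(np.mean(coefs))

    def global_efficiency(adj):
        """Mean inverse shortest-path length, a simple index of network integration."""
        n = adj.shape[0]
        inv_sum = 0.0
        for s in range(n):                                  # breadth-first search from each node
            dist = np.full(n, np.inf)
            dist[s] = 0.0
            frontier, d = [s], 0
            while frontier:
                d += 1
                nxt = []
                for u in frontier:
                    for v in np.flatnonzero(adj[u]):
                        if np.isinf(dist[v]):
                            dist[v] = d
                            nxt.append(v)
                frontier = nxt
            reachable = np.isfinite(dist) & (dist > 0)
            inv_sum += np.sum(1.0 / dist[reachable])
        return inv_sum / (n * (n - 1))

    rng = np.random.default_rng(0)
    envelopes = rng.standard_normal((32, 500))              # toy data: 32 sources x 500 samples
    adj = binarize_top_edges(np.corrcoef(envelopes), density=0.2)
    print("segregation (mean clustering):", round(clustering_coefficient(adj), 3))
    print("integration (global efficiency):", round(global_efficiency(adj), 3))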

  2. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  3. Temporal Processing in Audition: Insights from Music.

    Science.gov (United States)

    Rajendran, Vani G; Teki, Sundeep; Schnupp, Jan W H

    2017-11-03

    Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation. Perception of musical rhythms relies on an ability to form temporal predictions, a general feature of temporal processing that is equally relevant to auditory scene analysis, pattern detection, and speech perception. By bringing together findings from the music and auditory literature, we hope to inspire researchers to look beyond the conventions of their respective fields and consider the cross-disciplinary implications of studying auditory temporal sequence processing. We begin by highlighting music as an interesting sound stimulus that may provide clues to how temporal patterning in sound drives perception. Next, we review the SMS literature and discuss possible neural substrates for the perception of, and synchronization to, musical beat. We then move away from music to explore the perceptual effects of rhythmic timing in pattern detection, auditory scene analysis, and speech perception. Finally, we review the neurophysiology of general timing processes that may underlie aspects of the perception of rhythmic patterns. We conclude with a brief summary and outlook for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  4. The spectrotemporal filter mechanism of auditory selective attention

    Science.gov (United States)

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  5. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel Nelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably arise high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, that is represented first in the auditory brain.

  6. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia.

    Science.gov (United States)

    Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P

    2013-06-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.

  8. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.

  9. Functional magnetic resonance imaging measure of automatic and controlled auditory processing

    OpenAIRE

    Mitchell, Teresa V.; Morey, Rajendra A.; Inan, Seniha; Belger, Aysenil

    2005-01-01

    Activity within fronto-striato-temporal regions during processing of unattended auditory deviant tones and an auditory target detection task was investigated using event-related functional magnetic resonance imaging. Activation within the middle frontal gyrus, inferior frontal gyrus, anterior cingulate gyrus, superior temporal gyrus, thalamus, and basal ganglia were analyzed for differences in activity patterns between the two stimulus conditions. Unattended deviant tones elicited robust acti...

  10. Integrated community profiling indicates long-term temporal stability of the predominant faecal microbiota in captive cheetahs.

    Directory of Open Access Journals (Sweden)

    Anne A M J Becker

    Full Text Available Understanding the symbiotic relationship between gut microbes and their animal host requires characterization of the core microbiota across populations and in time. Especially in captive populations of endangered wildlife species such as the cheetah (Acinonyx jubatus), this knowledge is a key element to enhance feeding strategies and reduce gastrointestinal disorders. In order to investigate the temporal stability of the intestinal microbiota in cheetahs under human care, we conducted a longitudinal study over a 3-year period with bimonthly faecal sampling of 5 cheetahs housed in two European zoos. For this purpose, an integrated 16S rRNA DGGE-clone library approach was used in combination with a series of real-time PCR assays. Our findings disclosed a stable faecal microbiota, beyond intestinal community variations that were detected between zoo sample sets or between animals. The core of this microbiota was dominated by members of Clostridium clusters I, XI and XIVa, with mean concentrations ranging from 7.5-9.2 log10 CFU/g faeces and with significant positive correlations between these clusters (P<0.05), and by Lactobacillaceae. Moving window analysis of DGGE profiles revealed 23.3-25.6% change between consecutive samples for four of the cheetahs. The fifth animal in the study suffered from intermediate episodes of vomiting and diarrhea during the monitoring period and exhibited remarkably more change (39.4%). This observation may reflect the temporary impact of perturbations such as the animal's compromised health, antibiotic administration or a combination thereof, which temporarily altered the relative proportions of Clostridium clusters I and XIVa. In conclusion, this first long-term monitoring study of the faecal microbiota in feline strict carnivores not only reveals a remarkable compositional stability of this ecosystem, but also shows a qualitative and quantitative similarity in a defined set of faecal bacterial lineages across the five

  11. Integrated Community Profiling Indicates Long-Term Temporal Stability of the Predominant Faecal Microbiota in Captive Cheetahs

    Science.gov (United States)

    Becker, Anne A. M. J.; Janssens, Geert P. J.; Snauwaert, Cindy; Hesta, Myriam; Huys, Geert

    2015-01-01

    Understanding the symbiotic relationship between gut microbes and their animal host requires characterization of the core microbiota across populations and in time. Especially in captive populations of endangered wildlife species such as the cheetah (Acinonyx jubatus), this knowledge is a key element to enhance feeding strategies and reduce gastrointestinal disorders. In order to investigate the temporal stability of the intestinal microbiota in cheetahs under human care, we conducted a longitudinal study over a 3-year period with bimonthly faecal sampling of 5 cheetahs housed in two European zoos. For this purpose, an integrated 16S rRNA DGGE-clone library approach was used in combination with a series of real-time PCR assays. Our findings disclosed a stable faecal microbiota, beyond intestinal community variations that were detected between zoo sample sets or between animals. The core of this microbiota was dominated by members of Clostridium clusters I, XI and XIVa, with mean concentrations ranging from 7.5-9.2 log10 CFU/g faeces and with significant positive correlations between these clusters (P<0.05), and by Lactobacillaceae. Moving window analysis of DGGE profiles revealed 23.3-25.6% change between consecutive samples for four of the cheetahs. The fifth animal in the study suffered from intermediate episodes of vomiting and diarrhea during the monitoring period and exhibited remarkably more change (39.4%). This observation may reflect the temporary impact of perturbations such as the animal’s compromised health, antibiotic administration or a combination thereof, which temporarily altered the relative proportions of Clostridium clusters I and XIVa. In conclusion, this first long-term monitoring study of the faecal microbiota in feline strict carnivores not only reveals a remarkable compositional stability of this ecosystem, but also shows a qualitative and quantitative similarity in a defined set of faecal bacterial lineages across the five animals under study that may typify the core phylogenetic microbiome of cheetahs. PMID:25905625
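
    A moving window analysis of DGGE profiles, as mentioned in the abstract, expresses how much consecutive community profiles differ. The sketch below computes a percentage change as 100 minus a Pearson-based similarity between consecutive densitometric profiles; the band-intensity data and the choice of similarity coefficient are illustrative assumptions, not the study's actual analysis.

    import numpy as np

    def percent_change(profile_a, profile_b):
        """% change between two densitometric profiles: 100 minus Pearson similarity."""
        r = np.corrcoef(profile_a, profile_b)[0, 1]
        return 100.0 * (1.0 - r)

    def moving_window_change(profiles):
        """Mean % change over consecutive samples (rows = samples, cols = band positions)."""
        changes = [percent_change(profiles[i], profiles[i + 1]) for i in range(len(profiles) - 1)]
        return float(np.mean(changes)), changes

    rng = np.random.default_rng(1)
    base = rng.random(60)                                           # one animal's baseline band-intensity profile (toy)
    samples = np.array([base + rng.normal(0, 0.08, 60) for _ in range(18)])   # 3 years of bimonthly samples (toy)
    mean_change, _ = moving_window_change(samples)
    print(f"mean change between consecutive samples: {mean_change:.1f}%")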

  12. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Bleichner, Martin G; Debener, Stefan

    2016-01-01

    Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  13. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Full Text Available Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users’ speech recognition, by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  14. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus...... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters was also...... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  15. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats ( n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats ( n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given

  16. Biological impact of music and software-based auditory training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based training can improve these biological signals. These findings of biological plasticity, in a variety of subject populations, relate to attention and auditory memory, and represent an integrated auditory system influenced by both sensation and cognition. Learning outcomes The reader will (1) understand that the auditory system is malleable to experience and training, (2) learn the ingredients necessary for auditory learning to successfully be applied to communication, (3) learn that the auditory brainstem response to complex sounds (cABR) is a window into the integrated auditory system, and (4) see examples of how cABR can be used to track the outcome of experience and training. PMID:22789822

  17. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  18. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  19. Dynamic links between theta executive functions and alpha storage buffers in auditory and visual working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2010-05-01

    Working memory (WM) tasks require not only distinct functions such as a storage buffer and central executive functions, but also coordination among these functions. Neuroimaging studies have revealed the contributions of different brain regions to different functional roles in WM tasks; however, little is known about the neural mechanism governing their coordination. Electroencephalographic (EEG) rhythms, especially theta and alpha, are known to appear over distributed brain regions during WM tasks, but the rhythms associated with task-relevant regional coupling have not been obtained thus far. In this study, we conducted time-frequency analyses for EEG data in WM tasks that include manipulation periods and memory storage buffer periods. We used both auditory WM tasks and visual WM tasks. The results successfully demonstrated function-specific EEG activities. The frontal theta amplitudes increased during the manipulation periods of both tasks. The alpha amplitudes increased during not only the manipulation but also the maintenance periods in the temporal area for the auditory WM and the parietal area for the visual WM. The phase synchronization analyses indicated that, under the relevant task conditions, the temporal and parietal regions show enhanced phase synchronization in the theta bands with the frontal region, whereas phase synchronization between theta and alpha is significantly enhanced only within the individual areas. Our results suggest that WM task-relevant brain regions are coordinated by distant theta synchronization for central executive functions, by local alpha synchronization for the memory storage buffer, and by theta-alpha coupling for inter-functional integration.
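
    Phase synchronization between regions, as analyzed in this study, is often quantified with a phase-locking value. The sketch below is a generic illustration using a Hilbert-transform phase estimate on band-passed toy signals; the channel names, filter settings and data are assumptions, not the study's EEG pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        """Zero-phase Butterworth band-pass filter."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def phase_locking_value(x, y, lo, hi, fs):
        """PLV between two signals within a frequency band (1 = perfect phase locking)."""
        phx = np.angle(hilbert(bandpass(x, lo, hi, fs)))
        phy = np.angle(hilbert(bandpass(y, lo, hi, fs)))
        return np.abs(np.mean(np.exp(1j * (phx - phy))))

    fs = 250.0
    t = np.arange(0, 10, 1 / fs)
    theta = np.sin(2 * np.pi * 6 * t)                               # shared 6 Hz theta component (toy)
    rng = np.random.default_rng(2)
    frontal = theta + 0.5 * rng.standard_normal(t.size)             # "frontal" channel (toy)
    parietal = np.roll(theta, 5) + 0.5 * rng.standard_normal(t.size)  # lagged "parietal" channel (toy)
    print(f"theta-band PLV: {phase_locking_value(frontal, parietal, 4, 8, fs):.2f}")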

  20. Temporal Reference, Attentional Modulation, and Crossmodal Assimilation

    Directory of Open Access Journals (Sweden)

    Yingqi Wan

    2018-06-01

    Full Text Available The crossmodal assimilation effect refers to the prominent phenomenon by which an ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception (such as of a visual interval) of the subsequent task-relevant target events in another sensory modality. In the current experiments, using a visual Ternus display, we examined the roles of the temporal reference, materialized as the time information accumulated before the onset of the target event, as well as of attentional modulation, in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean of the auditory inter-intervals and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence contribute to biasing the percept of visual motion. A longer mean (or last) interval elicited more reports of group motion, whereas a shorter mean (or last) auditory interval gave rise to a more dominant percept of element motion. Importantly, observers showed dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated from the preceding sound sequence by a long gap interval, the assimilation effect of the ensemble mean was reduced. Our findings suggest that crossmodal assimilation relies on a suitable temporal reference at the adaptation level, and reveal a general temporal perceptual grouping principle underlying complex audio-visual interactions in everyday dynamic situations.
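
    To make the "ensemble mean" concrete: the abstract contrasts the geometric mean of the preceding auditory intervals with the last interval alone. A minimal illustration with invented interval values (not taken from the study):

    import numpy as np

    # Illustrative auditory inter-interval sequence (ms); the ensemble mean discussed
    # in the abstract is the geometric mean, contrasted with the last interval alone.
    intervals = np.array([120.0, 150.0, 180.0, 140.0, 200.0])
    geometric_mean = np.exp(np.mean(np.log(intervals)))
    print(f"ensemble (geometric) mean = {geometric_mean:.1f} ms, last interval = {intervals[-1]:.0f} ms")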

  1. Quantifying stimulus-response rehabilitation protocols by auditory feedback in Parkinson's disease gait pattern

    Science.gov (United States)

    Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo

    2017-11-01

    External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. To cope with these challenges, several studies and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical settings. The current work presents a semiautomated strategy for spatiotemporal feature extraction to study the relations between auditory temporal stimulation and the spatiotemporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's (n = 12, in stages 1, 2 and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 +/- 0.008) and in PD subjects in stage 2 (R = 0.95 +/- 0.03) and stage 3 (R = 0.89 +/- 0.05). Normalized step length showed a variable response between low and high gait velocities (R between 0.2 and 0.97). The correlation between normalized mean velocity and the stimulus was strong in all PD subjects in stage 2 (R > 0.96), PD subjects in stage 3 (R > 0.84) and controls (R > 0.91) for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 +/- 39.2 steps/min, 0.12 +/- 0.06 in step length and 0.33 +/- 0.16 in mean velocity); in this group, these values were higher than their own baseline. These variations are related to the direct effect of the metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and
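
    The reported R values describe linear relations between the stimulation rate and gait measures such as cadence. A minimal sketch of that computation with invented numbers (not the study's measurements):

    import numpy as np

    # Hypothetical stimulation rates (beats/min) and measured cadence (steps/min)
    # for one participant; the real protocol's values are not reproduced here.
    stim_bpm = np.array([80, 90, 100, 110, 120, 130], dtype=float)
    cadence  = np.array([82, 91, 99, 112, 118, 131], dtype=float)

    slope, intercept = np.polyfit(stim_bpm, cadence, 1)      # least-squares linear fit
    r = np.corrcoef(stim_bpm, cadence)[0, 1]                  # Pearson correlation
    print(f"cadence ≈ {slope:.2f} * stimulus + {intercept:.1f}, R = {r:.3f}")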

  2. Delayed Mismatch Field Latencies in Autism Spectrum Disorder with Abnormal Auditory Sensitivity: A Magnetoencephalographic Study.

    Science.gov (United States)

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Sugata, Hisato; Hanaie, Ryuzo; Nagatani, Fumiyo; Yamamoto, Tomoka; Tachibana, Masaya; Tominaga, Koji; Hirata, Masayuki; Mohri, Ikuko; Taniike, Masako

    2017-01-01

    Although abnormal auditory sensitivity is the most common sensory impairment associated with autism spectrum disorder (ASD), the neurophysiological mechanisms remain unknown. In previous studies, we reported that this abnormal sensitivity in patients with ASD is associated with delayed and prolonged responses in the auditory cortex. In the present study, we investigated alterations in residual M100 and MMFs in children with ASD who experience abnormal auditory sensitivity. We used magnetoencephalography (MEG) to measure MMF elicited by an auditory oddball paradigm (standard tones: 300 Hz, deviant tones: 700 Hz) in 20 boys with ASD (11 with abnormal auditory sensitivity: mean age, 9.62 ± 1.82 years, 9 without: mean age, 9.07 ± 1.31 years) and 13 typically developing boys (mean age, 9.45 ± 1.51 years). We found that temporal and frontal residual M100/MMF latencies were significantly longer only in children with ASD who have abnormal auditory sensitivity. In addition, prolonged residual M100/MMF latencies were correlated with the severity of abnormal auditory sensitivity in temporal and frontal areas of both hemispheres. Therefore, our findings suggest that children with ASD and abnormal auditory sensitivity may have atypical neural networks in the primary auditory area, as well as in brain areas associated with attention switching and inhibitory control processing. This is the first report of an MEG study demonstrating altered MMFs to an auditory oddball paradigm in patients with ASD and abnormal auditory sensitivity. These findings contribute to knowledge of the mechanisms for abnormal auditory sensitivity in ASD, and may therefore facilitate development of novel clinical interventions.

  3. Delayed Mismatch Field Latencies in Autism Spectrum Disorder with Abnormal Auditory Sensitivity: A Magnetoencephalographic Study

    Directory of Open Access Journals (Sweden)

    Junko Matsuzaki

    2017-09-01

    Full Text Available Although abnormal auditory sensitivity is the most common sensory impairment associated with autism spectrum disorder (ASD), the neurophysiological mechanisms remain unknown. In previous studies, we reported that this abnormal sensitivity in patients with ASD is associated with delayed and prolonged responses in the auditory cortex. In the present study, we investigated alterations in residual M100 and MMFs in children with ASD who experience abnormal auditory sensitivity. We used magnetoencephalography (MEG) to measure MMF elicited by an auditory oddball paradigm (standard tones: 300 Hz, deviant tones: 700 Hz) in 20 boys with ASD (11 with abnormal auditory sensitivity: mean age, 9.62 ± 1.82 years, 9 without: mean age, 9.07 ± 1.31 years) and 13 typically developing boys (mean age, 9.45 ± 1.51 years). We found that temporal and frontal residual M100/MMF latencies were significantly longer only in children with ASD who have abnormal auditory sensitivity. In addition, prolonged residual M100/MMF latencies were correlated with the severity of abnormal auditory sensitivity in temporal and frontal areas of both hemispheres. Therefore, our findings suggest that children with ASD and abnormal auditory sensitivity may have atypical neural networks in the primary auditory area, as well as in brain areas associated with attention switching and inhibitory control processing. This is the first report of an MEG study demonstrating altered MMFs to an auditory oddball paradigm in patients with ASD and abnormal auditory sensitivity. These findings contribute to knowledge of the mechanisms for abnormal auditory sensitivity in ASD, and may therefore facilitate development of novel clinical interventions.

  4. The neurokinin-3 receptor agonist senktide facilitates the integration of memories for object, place and temporal order into episodic memory.

    Science.gov (United States)

    Chao, Owen Y; Nikolaus, Susanne; Huston, Joseph P; de Souza Silva, Maria A

    2014-10-01

    Senktide, a potent neurokinin-3 receptor (NK3-R) agonist, has been shown to have promnestic effects in adult and aged rodents and to facilitate episodic-like memory (ELM) in mice when administered before the learning trial. In the present study we assessed the effects of senktide on memory consolidation by administering it post-trial (after the learning trial) in adult rats. We applied an ELM test, based on the integrated memory for object, place and temporal order, which we developed (Kart-Teke, de Souza Silva, Huston, & Dere, 2006). This test involves two learning trials and one test trial. We examined intervals of 1 h and 23 h between the learning and test trials (experiment 1) in untreated animals and found that they exhibited intact ELM after a delay of 1 h, but not 23 h. In another test for ELM performed 7 days later, vehicle or senktide (0.2 mg/kg, s.c.) was applied immediately after the second learning trial and the test was conducted 23 h later (experiment 2). Senktide treatment recovered components of ELM (memory for place and object) compared with vehicle-treated animals. After one more week, vehicle or senktide (0.2 mg/kg, s.c.) was applied post-trial and the test conducted 6 h later (experiment 3). The senktide-treated group exhibited intact ELM, unlike the vehicle-treated group. Finally, animals received post-trial treatment with either vehicle or SR142801, a selective NK3-R antagonist (6 mg/kg, i.p.), 1 min before senktide injection (0.2 mg/kg, s.c.) in the ELM paradigm and were tested 6 h later (experiment 4). The vehicle+senktide group showed intact ELM, while the SR142801+senktide group did not. The results indicate that senktide facilitated the consolidation or the expression of ELM and that the senktide effect was NK3-R dependent. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. High spatial-temporal resolution and integrated surface and subsurface precipitation-runoff modelling for a small stormwater catchment

    Science.gov (United States)

    Hailegeorgis, Teklu T.; Alfredsen, Knut

    2018-02-01

    Reliable runoff estimation is important for design of water infrastructure and flood risk management in urban catchments. We developed a spatially distributed Precipitation-Runoff (P-R) model that explicitly represents the land cover information, performs integrated modelling of surface and subsurface components of the urban precipitation water cycle and flow routing. We conducted parameter calibration and validation for a small (21.255 ha) stormwater catchment in Trondheim City during Summer-Autumn events, the Summer-Autumn season, and snow-influenced Winter-Spring seasons, at high spatial and temporal resolutions of 5 m × 5 m grid size and 2 min, respectively. The calibration resulted in good performance measures (Nash-Sutcliffe efficiency, NSE = 0.65-0.94) and acceptable validation NSE for the seasonal and snow-influenced periods. The infiltration excess surface runoff dominates the peak flows, while the contribution of subsurface flow to the sewer pipes also augments the peak flows. Based on the total volumes of simulated flow in sewer pipes (Qsim) and precipitation (P) during the calibration periods, Qsim/P ranges from 21.44% for an event to 56.50% for the Winter-Spring season, which is in close agreement with the observed volumes (Qobs/P). The lowest percentage of precipitation volume that is transformed to the total simulated runoff in the catchment (QT) is 79.77%. Computation of evapotranspiration (ET) indicated that ET/P is less than 3% for the events and snow-influenced seasons, while it is about 18% for the Summer-Autumn season. The subsurface flow contribution to the sewer pipes is markedly higher than the total surface runoff volume for some events and the Summer-Autumn season. The highest peak flow rates occur in the Winter-Spring season. Therefore, urban runoff simulation for design and management purposes should include two-way interactions between the subsurface runoff and flow in sewer pipes, and snow-influenced seasons. The developed urban P-R model is
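    The Nash-Sutcliffe efficiency used to judge the calibration above compares the residual variance of the simulated flows with the variance of the observed flows (NSE = 1 is a perfect fit; NSE = 0 means the model is no better than the observed mean). A minimal sketch of that formula, applied to made-up flow values rather than the catchment data, is shown below.

```python
import numpy as np

def nash_sutcliffe_efficiency(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - (residual sum of squares / variance of observations)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual_ss = np.sum((observed - simulated) ** 2)
    variance_ss = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_ss / variance_ss

# Illustrative short runoff series (made-up numbers, not the Trondheim data).
q_obs = [0.12, 0.18, 0.35, 0.60, 0.48, 0.30, 0.20, 0.15]
q_sim = [0.10, 0.20, 0.35, 0.55, 0.50, 0.28, 0.22, 0.14]
print(f"NSE = {nash_sutcliffe_efficiency(q_obs, q_sim):.2f}")
```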

  6. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  7. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new

  8. Simultaneity and Temporal Order Judgments Are Coded Differently and Change With Age: An Event-Related Potential Study

    Directory of Open Access Journals (Sweden)

    Aysha Basharat

    2018-04-01

    Full Text Available Multisensory integration is required for a number of daily living tasks where the inability to accurately identify simultaneity and temporality of multisensory events results in errors in judgment leading to poor decision-making and dangerous behavior. Previously, our lab discovered that older adults exhibited impaired timing of audiovisual events, particularly when making temporal order judgments (TOJs). Simultaneity judgments (SJs), however, were preserved across the lifespan. Here, we investigate the difference between the TOJ and SJ tasks in younger and older adults to assess neural processing differences between these two tasks and across the lifespan. Event-related potentials (ERPs) were studied to determine between-task and between-age differences. Results revealed task specific differences in perceiving simultaneity and temporal order, suggesting that each task may be subserved via different neural mechanisms. Here, auditory N1 and visual P1 ERP amplitudes confirmed that unisensory processing of audiovisual stimuli did not differ between the two tasks within both younger and older groups, indicating that performance differences between tasks arise either from multisensory integration or higher-level decision-making. Compared to younger adults, older adults showed a sustained higher auditory N1 ERP amplitude response across SOAs, suggestive of broader response properties from an extended temporal binding window. Our work provides compelling evidence that different neural mechanisms subserve the SJ and TOJ tasks and that simultaneity and temporal order perception are coded differently and change with age.

  9. Auditory hallucinations: A review of the ERC "VOICE" project.

    Science.gov (United States)

    Hugdahl, Kenneth

    2015-06-22

    In this invited review I provide a selective overview of recent research on brain mechanisms and cognitive processes involved in auditory hallucinations. The review is focused on research carried out in the "VOICE" ERC Advanced Grant Project, funded by the European Research Council, but I also review and discuss the literature in general. Auditory hallucinations are suggested to be perceptual phenomena, with a neuronal origin in the speech perception areas in the temporal lobe. The phenomenology of auditory hallucinations is conceptualized along three domains, or dimensions: a perceptual dimension, experienced as someone speaking to the patient; a cognitive dimension, experienced as an inability to inhibit, or ignore, the voices; and an emotional dimension, experienced as the "voices" having primarily a negative, or sinister, emotional tone. I will review cognitive, imaging, and neurochemistry data related to these dimensions, primarily the first two. The reviewed data are summarized in a model that sees auditory hallucinations as initiated from temporal lobe neuronal hyper-activation that draws attentional focus inward, and which is not inhibited due to frontal lobe hypo-activation. It is further suggested that this is maintained through abnormal glutamate and possibly gamma-aminobutyric acid transmitter mediation, which could point towards new pathways for pharmacological treatment. A final section discusses new methods of acquiring quantitative data on the phenomenology and subjective experience of auditory hallucinations that go beyond standard interview questionnaires, by suggesting an iPhone/iPod app.

  10. Modulation frequency as a cue for auditory speed perception.

    Science.gov (United States)

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
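    As a rough illustration of the stimulus dimension described above, the sketch below synthesises a noise burst whose amplitude envelope is modulated at a chosen AM frequency; the function name, parameter values, and the use of plain white noise as the carrier are assumptions for demonstration, not the authors' stimulus code.

```python
import numpy as np

def rattling_sound(duration_s, am_frequency_hz, fs=44100, depth=1.0):
    """Generate a noise burst amplitude-modulated at a given AM frequency.

    Higher AM frequencies mimic the rattle of an object sliding faster over a
    corrugated surface, the speed cue described in the abstract above.
    """
    n = int(duration_s * fs)
    t = np.arange(n) / fs
    carrier = np.random.default_rng(1).normal(size=n)               # noise carrier
    envelope = 1.0 + depth * np.sin(2 * np.pi * am_frequency_hz * t)  # AM envelope
    signal = carrier * envelope
    return signal / np.max(np.abs(signal))                           # normalise to +/-1

# Two stimuli differing only in AM frequency: the higher-frequency one would be
# expected, on this account, to be perceived as moving faster.
slow = rattling_sound(1.0, am_frequency_hz=20)
fast = rattling_sound(1.0, am_frequency_hz=80)
```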

  11. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
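    Detection and discrimination performance of the kind compared above is conventionally summarised with the signal-detection sensitivity index d'. The sketch below shows that standard computation on made-up trial counts; it illustrates the measure only and is not the analysis reported in the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction keeps the z-transform finite when a rate is 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Made-up counts for an auditory detection block with task-irrelevant flashes present.
print(f"d' = {d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41):.2f}")
```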

  12. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-