WorldWideScience

Sample records for temporal visual area

  1. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    Science.gov (United States)

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

    Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but it primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominantly involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency; auditory naming, with a contrast of auditory reversed speech; and picture naming, with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces, respectively) resulted in left-lateralised activations for patients and controls, an effect that was more pronounced for LTLE than for RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated

  2. Early Local Activity in Temporal Areas Reflects Graded Content of Visual Perception

    Directory of Open Access Journals (Sweden)

    Chiara Francesca Tagliabue

    2016-04-01

    In visual cognitive neuroscience, the debate on consciousness is focused on two major topics: the search for the neural correlates of the different properties of visual awareness, and the controversy over the graded versus dichotomous nature of visual conscious experience. The aim of this study is to search for possible neural correlates of different grades of visual awareness by investigating event-related potentials (ERPs) to reduced-contrast visual stimuli whose perceptual clarity was rated on the four-point Perceptual Awareness Scale (PAS). Results revealed a left centro-parietal negative deflection (Visual Awareness Negativity; VAN) peaking at 280-320 ms from stimulus onset, related to the perceptual content of the stimulus, followed by a bilateral positive deflection (Late Positivity; LP) peaking at 510-550 ms over almost all electrodes, reflecting post-perceptual processes performed on such content. Interestingly, the amplitude of both deflections gradually increased as a function of visual awareness. Moreover, the intracranial generators of the phenomenal content (VAN) were found to be located in the left temporal lobe. The present data thus seem to suggest (1) that visual conscious experience is characterized by a gradual increase of perceived clarity at both the behavioral and neural levels, and (2) that the actual content of perceptual experiences emerges from early local activation in temporal areas, without the need for later widespread frontal engagement.

  3. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    Science.gov (United States)

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

    We examined the role of temporal synchrony-the simultaneous appearance of visual features-in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  4. Distinct spatio-temporal profiles of beta-oscillations within visual and sensorimotor areas during action recognition as revealed by MEG.

    Science.gov (United States)

    Pavlidou, Anastasia; Schnitzler, Alfons; Lange, Joachim

    2014-05-01

    The neural correlates of action recognition have been widely studied in visual and sensorimotor areas of the human brain. However, the role of neuronal oscillations involved during the process of action recognition remains unclear. Here, we were interested in how the plausibility of an action modulates neuronal oscillations in visual and sensorimotor areas. Subjects viewed point-light displays (PLDs) of biomechanically plausible and implausible versions of the same actions. Using magnetoencephalography (MEG), we examined dynamic changes of oscillatory activity during these action recognition processes. While both actions elicited oscillatory activity in visual and sensorimotor areas in several frequency bands, a significant difference was confined to the beta-band (∼20 Hz). An increase of power for plausible actions was observed in left temporal, parieto-occipital and sensorimotor areas of the brain, in the beta-band in successive order between 1650 and 2650 msec. These distinct spatio-temporal beta-band profiles suggest that the action recognition process is modulated by the degree of biomechanical plausibility of the action, and that spectral power in the beta-band may provide a functional interaction between visual and sensorimotor areas in humans. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Visual perception and memory systems: from cortex to medial temporal lobe.

    Science.gov (United States)

    Khan, Zafar U; Martín-Montañez, Elisa; Baxter, Mark G

    2011-05-01

    Visual perception and memory are the most important components of vision processing in the brain. It was long thought that the perceptual aspects of a visual stimulus are processed in visual cortical areas and that this serves as the substrate for the formation of visual memory in a distinct part of the brain, the medial temporal lobe. However, current evidence indicates that there is no such functional separation of areas: the entire visual cortical pathway, together with the connected medial temporal lobe, is important for both perception and visual memory. Though some aspects of this view are debated, evidence from both sides will be explored here. In this review, we discuss the anatomical and functional architecture of the entire system and the implications of these structures for visual perception and memory.

  6. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    Science.gov (United States)

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine, and later Benson and Geschwind, postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporo-occipital region. More recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading, although the exact location and function of these areas remain a matter of debate. To confirm the participation of the basal temporal region in reading, extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words, with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA), where stimulation resulted in global language dysfunction in the visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of this visual language area was exclusively involved in lexical processing, while the other part processed both lexical and nonlexical symbols.

  7. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  8. Non-retinotopic motor-visual recalibration to temporal lag

    Directory of Open Access Journals (Sweden)

    Masaki eTsujita

    2012-11-01

    Temporal order judgment between a voluntary motor action and its perceptual feedback is important for distinguishing sensory feedback that is caused by the observer's own action from other stimuli that are irrelevant to that action. Prolonged exposure to a fixed temporal lag between a motor action and its visual feedback recalibrates the motor-visual temporal relationship and consequently shifts the point of subjective simultaneity (PSS). Previous studies on audio-visual temporal recalibration without voluntary action revealed that both low- and high-level processing are involved. However, it is not clear how low- and high-level processing affect recalibration to a constant temporal lag between a voluntary action and visual feedback. This study examined the retinotopic specificity of motor-visual temporal recalibration. During the adaptation phase, observers repeatedly pressed a key, and a visual stimulus was presented in the left or right visual field with a fixed temporal lag (0 or 200 ms). In the test phase, observers performed a temporal order judgment between their voluntary keypress and a test stimulus, which was presented either in the same visual field as in the adaptation phase or in the opposite one. We found that the PSS was shifted toward the exposed lag in both visual fields. These results suggest that low-level visual processing, which is retinotopically specific, makes only a minor contribution to this multimodal adaptation, and that the adaptive shift of the PSS mainly depends on high-level processing, such as attention to specific properties of the stimulus.

  9. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    Science.gov (United States)

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  10. Sound improves diminished visual temporal sensitivity in schizophrenia

    NARCIS (Netherlands)

    de Boer-Schellekens, L.; Stekelenburg, J.J.; Maes, J.P.; van Gool, A.R.; Vroomen, J.

    2014-01-01

    Visual temporal processing and multisensory integration (MSI) of sound and vision were examined in individuals with schizophrenia using a visual temporal order judgment (TOJ) task. Compared to a non-psychiatric control group, persons with schizophrenia were less sensitive judging the temporal order

  11. Attention Increases Spike Count Correlations between Visual Cortical Areas

    Science.gov (United States)

    Cohen, Marlene R.

    2016-01-01

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. SIGNIFICANCE STATEMENT Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. We recorded simultaneously

  12. Attention Increases Spike Count Correlations between Visual Cortical Areas.

    Science.gov (United States)

    Ruff, Douglas A; Cohen, Marlene R

    2016-07-13

    Visual attention, which improves perception of attended locations or objects, has long been known to affect many aspects of the responses of neuronal populations in visual cortex. There are two nonmutually exclusive hypotheses concerning the neuronal mechanisms that underlie these perceptual improvements. The first hypothesis, that attention improves the information encoded by a population of neurons in a particular cortical area, has considerable physiological support. The second hypothesis is that attention improves perception by selectively communicating relevant visual information. This idea has been tested primarily by measuring interactions between neurons on very short timescales, which are mathematically nearly independent of neuronal interactions on longer timescales. We tested the hypothesis that attention changes the way visual information is communicated between cortical areas on longer timescales by recording simultaneously from neurons in primary visual cortex (V1) and the middle temporal area (MT) in rhesus monkeys. We used two independent and complementary approaches. Our correlative experiment showed that attention increases the trial-to-trial response variability that is shared between the two areas. In our causal experiment, we electrically microstimulated V1 and found that attention increased the effect of stimulation on MT responses. Together, our results suggest that attention affects both the way visual stimuli are encoded within a cortical area and the extent to which visual information is communicated between areas on behaviorally relevant timescales. Visual attention dramatically improves the perception of attended stimuli. Attention has long been thought to act by selecting relevant visual information for further processing. It has been hypothesized that this selection is accomplished by increasing communication between neurons that encode attended information in different cortical areas. We recorded simultaneously from neurons in primary

  13. Attractive faces temporally modulate visual attention

    Science.gov (United States)

    Nakamura, Koyo; Kawabata, Hideaki

    2014-01-01

    Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness by using a rapid serial visual presentation. Fourteen male faces and two female faces were successively presented for 160 ms each, and participants were asked to identify two female faces embedded among a series of multiple male distractor faces. Identification of a second female target (T2) was impaired when a first target (T1) was attractive compared to neutral or unattractive faces, at 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive faces at 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994

  14. Attractive faces temporally modulate visual attention

    Directory of Open Access Journals (Sweden)

    Koyo eNakamura

    2014-06-01

    Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness by using a rapid serial visual presentation (RSVP). Fourteen male faces and two female faces were successively presented for 160 ms each, and participants were asked to identify two female faces embedded among a series of multiple male distractor faces. Identification of a second female target (T2) was impaired when a first target (T1) was attractive compared to neutral or unattractive faces at 320 ms SOA; identification was improved when T1 was attractive compared to unattractive faces at 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention.

  15. Sunglasses with thick temples and frame constrict temporal visual field.

    Science.gov (United States)

    Denion, Eric; Dugué, Audrey Emmanuelle; Augy, Sylvain; Coffin-Pichonnet, Sophie; Mouriaux, Frédéric

    2013-12-01

    Our aim was to compare the impact of two types of sunglasses on visual field and glare: one ("thick sunglasses") with a thick plastic frame and wide temples and one ("thin sunglasses") with a thin metal frame and thin temples. Using the Goldmann perimeter, visual field surface areas (cm²) were calculated as projections on a 30-cm virtual cupola. A V4 test object was used, from seen to unseen, in 15 healthy volunteers in the primary position of gaze ("base visual field"), then allowing eye motion ("eye motion visual field") without glasses, then with the "thin sunglasses," followed by the "thick sunglasses." Visual field surface area differences greater than the 14% reproducibility error of the method and reaching statistical significance were considered meaningful. The visual field was constricted with the "thick sunglasses"; the decrease was most severe in the temporal quadrant (-33%) and was greater with the "thick sunglasses" than with the "thin sunglasses." The glare protection afforded by the "thick sunglasses" is offset by the much poorer ability to use lateral space exploration; this results in a loss of most, if not all, of the additional visual field gained through eye motion.

  16. Visually defining and querying consistent multi-granular clinical temporal abstractions.

    Science.gov (United States)

    Combi, Carlo; Oliboni, Barbara

    2012-02-01

    The main goal of this work is to propose a framework for the visual specification and querying of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high-level patterns derived from several time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise and high-level description of temporal raw data and a suitable way to support decision making. Granularities define partitions on the time line and allow one to represent time, and thus temporal clinical information, at different levels of detail, according to the requirements of the represented clinical domain. The visual representation of temporal information has been studied for several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and can benefit from visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated on clinical users for the specification of temporal clinical abstractions. Visual approaches to boolean queries have been considered in recent years and have confirmed that visual support for the specification of complex boolean queries is both an important and a difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. A new algorithm, underlying the visual language, allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on

  17. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    Science.gov (United States)

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.

  18. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    Science.gov (United States)

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies showed that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight University students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300ms (short) and 900ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability was observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality independent role of A1 in temporal processing and a modality specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue

    2006-01-01

    BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movement aids the recognition of spoken words. Conventionally it is thought that the visual input disambiguates otherwise confusing auditory input. HYPOTHESIS: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation; the effect is significant.

  20. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    Science.gov (United States)

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels, and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early activity also involved visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, and with increased activity around 100 msec which decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only in response to congruent audio-visual stimuli 30-70 msec following consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then

  1. Using time-to-contact information to assess potential collision modulates both visual and temporal prediction networks

    Directory of Open Access Journals (Sweden)

    Jennifer T Coull

    2008-09-01

    Accurate estimates of the time-to-contact (TTC) of approaching objects are crucial for survival. We used an ecologically valid driving simulation to compare and contrast the neural substrates of egocentric (head-on approach) and allocentric (lateral approach) TTC tasks in a fully factorial, event-related fMRI design. Compared to colour control tasks, both egocentric and allocentric TTC tasks activated left ventral premotor cortex/frontal operculum and inferior parietal cortex, the same areas that have previously been implicated in temporal attentional orienting. Despite differences in visual and cognitive demands, both TTC and temporal orienting paradigms encourage the use of temporally predictive information to guide behaviour, suggesting these areas may form a core network for temporal prediction. We also demonstrated that the temporal derivative of the perceptual index tau (tau-dot) held predictive value for making collision judgements and varied inversely with activity in primary visual cortex (V1). Specifically, V1 activity increased with the increasing likelihood of reporting a collision, suggesting top-down attentional modulation of early visual processing areas as a function of subjective collision. Finally, egocentric viewpoints provoked a response bias for reporting collisions, rather than no-collisions, reflecting increased caution for head-on approaches. Associated increases in SMA activity suggest that motor preparation mechanisms were engaged, despite the perceptual nature of the task.
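    The collision-judgement analysis above turns on the perceptual index tau and its temporal derivative (tau-dot). As a rough illustration of how these quantities can be computed from the optical angle of an approaching object, here is a minimal sketch assuming the standard definition tau = theta / (dtheta/dt); the function and variable names are illustrative and not taken from the study.

    ```python
    import numpy as np

    def tau_and_tau_dot(theta, dt):
        """Estimate the time-to-contact index tau and its temporal derivative.

        theta : 1-D array of the optical angle (radians) subtended by the
                approaching object, sampled every dt seconds.
        Returns (tau, tau_dot); tau is one sample shorter than theta and
        tau_dot one sample shorter than tau, because of finite differences.
        """
        theta = np.asarray(theta, dtype=float)
        theta_dot = np.diff(theta) / dt      # rate of optical expansion
        tau = theta[:-1] / theta_dot         # tau = theta / (d theta / dt)
        tau_dot = np.diff(tau) / dt          # tau-dot, the collision index
        return tau, tau_dot

    # Example: an object approaching at constant speed. Tau shrinks toward
    # contact and tau-dot stays roughly constant (near -1).
    dt = 0.01
    t = np.arange(0.0, 2.0, dt)
    distance = 20.0 - 8.0 * t                # metres remaining to the observer
    theta = 2.0 * np.arctan(0.5 / distance)  # angle subtended by a 1-m-wide object
    tau, tau_dot = tau_and_tau_dot(theta, dt)
    ```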

  2. Visual search of cyclic spatio-temporal events

    Science.gov (United States)

    Gautier, Jacques; Davoine, Paule-Annick; Cunty, Claire

    2018-05-01

    The analysis of spatio-temporal events, and especially of the relationships between their different dimensions (space, time, thematic attributes), can be done with geovisualization interfaces. But few geovisualization tools integrate the cyclic dimension of spatio-temporal event series (natural or social events). Time Coil and Time Wave diagrams represent both linear time and cyclic time; by introducing a cyclic temporal scale, these diagrams can highlight the cyclic characteristics of spatio-temporal events. However, the cyclic temporal scales that can be set are limited to usual durations such as days or months. Because of that, these diagrams cannot be used to visualize cyclic events that reappear with an unusual period, and they do not allow a visual search for cyclic events. Nor do they make it possible to identify relationships between the cyclic behaviour of the events and their spatial features, and in particular to identify localised cyclic events. The lack of means to represent cyclic time outside the temporal diagram of multi-view geovisualization interfaces limits the analysis of relationships between the cyclic reappearance of events and their other dimensions. In this paper, we propose a method and a geovisualization tool, based on an extension of Time Coil and Time Wave, to provide a visual search for cyclic events by allowing any duration to be set as the diagram's cyclic temporal scale. We also propose a symbology approach to push the representation of cyclic time into the map, in order to improve the analysis of relationships between space and the cyclic behaviour of events.
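    The extension described above amounts to letting the analyst choose an arbitrary period for the cyclic temporal scale and mapping each event onto a cycle index and a phase within that cycle. A minimal sketch of that mapping, assuming timestamped events and a user-chosen period in seconds (names are illustrative, not taken from the paper's tool):

    ```python
    from datetime import datetime, timezone

    def cyclic_coordinates(timestamps, period_seconds):
        """Map event timestamps onto a user-defined cyclic temporal scale.

        timestamps     : iterable of timezone-aware datetime objects.
        period_seconds : length of the cycle in seconds (any duration,
                         not just days or months).
        Returns a list of (cycle_index, phase) pairs, where phase in [0, 1)
        gives the position of the event within its cycle.
        """
        origin = min(timestamps)
        coords = []
        for ts in timestamps:
            elapsed = (ts - origin).total_seconds()
            cycle_index, offset = divmod(elapsed, period_seconds)
            coords.append((int(cycle_index), offset / period_seconds))
        return coords

    # Example: events recurring with an unusual ~11-day period cluster at
    # the same phase when the cyclic scale is set to 11 days.
    events = [datetime(2018, 5, d, 12, 0, tzinfo=timezone.utc) for d in (1, 12, 23)]
    print(cyclic_coordinates(events, 11 * 24 * 3600))
    ```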

  3. Unpredictable visual changes cause temporal memory averaging.

    Science.gov (United States)

    Ohyama, Junji; Watanabe, Katsumi

    2007-09-01

    Various factors influence the perceived timing of visual events. Yet little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object influences the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or not at all (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer to the timing of the color change than it actually did. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is drawn toward the timing of the unpredictable visual change.

  4. Dynamic spatial organization of the occipito-temporal word form area for second language processing.

    Science.gov (United States)

    Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li

    2017-08-01

    Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017

  5. Temporal Sequence of Visuo-Auditory Interaction in Multiple Areas of the Guinea Pig Visual Cortex

    Science.gov (United States)

    Nishimura, Masataka; Song, Wen-Jie

    2012-01-01

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences could be generated in V1 either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activity was recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulation resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction. PMID:23029483

  6. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    Science.gov (United States)

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal-domain processing in the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support the idea that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using the spatial hierarchy and the temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation on the Weizmann dataset, in recognition of a set of prototypical human movement patterns, showed that the proposed model is significantly more robust than other baseline models in recognizing dynamically occluded visual patterns. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns.

  7. Preoperative visual field deficits in temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Sanjeet S. Grewal

    2017-01-01

    Surgical resection and laser thermoablation have been used to treat drug-resistant epilepsy with good results. However, they are not without risk. One of the most commonly reported complications of temporal lobe surgery is contralateral superior homonymous quadrantanopsia. We describe a patient with asymptomatic preoperative quadrantanopsia fortuitously discovered as part of our recently modified protocol to evaluate patients prior to temporal lobe epilepsy surgery. This visual field deficit was subtle and not detected on routine clinical neurological examination. While we understand that this is a single case, we advocate further study with more detailed preoperative visual field examinations to characterize the true incidence of postoperative visual field lesions.

  8. Encoding model of temporal processing in human visual cortex.

    Science.gov (United States)

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of the temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings prompt a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has broad implications, because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
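    As a rough illustration of the two-temporal-channel idea described above, the sketch below builds a sustained channel that follows the stimulus time course and a transient channel that responds at stimulus onsets and offsets, sums them with free weights, and convolves the result with a haemodynamic response function to predict a BOLD time course. The single-gamma HRF, the squared-derivative transient channel, and all names are simplifying assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    def simple_hrf(dt, duration=30.0):
        """Single-gamma haemodynamic response function sampled every dt seconds."""
        t = np.arange(0.0, duration, dt)
        h = t ** 5 * np.exp(-t)              # peaks around 5 s after neural activity
        return h / h.sum()

    def two_channel_prediction(stimulus, dt, w_sustained, w_transient):
        """Predict a BOLD time course from a binary stimulus time course.

        stimulus    : 1-D array, 1 while the stimulus is on, 0 otherwise,
                      sampled every dt seconds (millisecond-scale dt is fine).
        w_sustained : weight of the sustained channel (follows the stimulus).
        w_transient : weight of the transient channel (responds at onsets and
                      offsets; here the absolute temporal derivative, squared).
        """
        stimulus = np.asarray(stimulus, dtype=float)
        sustained = stimulus
        transient = np.abs(np.diff(stimulus, prepend=0.0)) ** 2
        neural = w_sustained * sustained + w_transient * transient
        return np.convolve(neural, simple_hrf(dt))[: stimulus.size]

    # Example: a 2 s flickering stimulus versus a 2 s static stimulus yields
    # different predictions when the transient weight dominates.
    dt = 0.01
    flicker = np.tile([1] * 25 + [0] * 25, 4)   # 0.25 s on / 0.25 s off for 2 s
    static = np.ones(200)
    bold_flicker = two_channel_prediction(flicker, dt, w_sustained=1.0, w_transient=3.0)
    bold_static = two_channel_prediction(static, dt, w_sustained=1.0, w_transient=3.0)
    ```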

  9. Role of Fusiform and Anterior Temporal Cortical Areas in Facial Recognition

    Science.gov (United States)

    Nasr, Shahin; Tootell, Roger BH

    2012-01-01

    Recent fMRI studies suggest that cortical face processing extends well beyond the fusiform face area (FFA), including unspecified portions of the anterior temporal lobe. However, the exact location of such anterior temporal region(s), and their role during active face recognition, remain unclear. Here we demonstrate that (in addition to FFA) a small bilateral site in the anterior tip of the collateral sulcus ('AT'; the anterior temporal face patch) is selectively activated during recognition of faces but not houses (a non-face object). In contrast to the psychophysical prediction that inverted and contrast-reversed faces are processed like other non-face objects, both FFA and AT (but not other visual areas) were also activated during recognition of inverted and contrast-reversed faces. However, response accuracy was better correlated with recognition-driven activity in AT than in FFA. These data support a segregated, hierarchical model of face recognition processing, extending to the anterior temporal cortex. PMID:23034518

  10. Enhanced Visual Temporal Resolution in Autism Spectrum Disorders

    NARCIS (Netherlands)

    Falter, Christine M.; Elliott, Mark A.; Bailey, Anthony J.

    2012-01-01

    Cognitive functions that rely on accurate sequencing of events, such as action planning and execution, verbal and nonverbal communication, and social interaction rely on well-tuned coding of temporal event-structure. Visual temporal event-structure coding was tested in 17 high-functioning

  11. Cluster Oriented Spatio Temporal Multidimensional Data Visualization of Earthquakes in Indonesia

    Directory of Open Access Journals (Sweden)

    Mohammad Nur Shodiq

    2016-03-01

    Spatio-temporal data clustering is a challenging task. The results of clustering are used to investigate seismic parameters, which describe the characteristics of earthquake behavior. One effective technique for studying multidimensional spatio-temporal data is visualization, but visualizing multidimensional data is a complicated problem because the analysis involves both the observed data clusters and the seismic parameters. In this paper, we propose a visualization system, called IES (Indonesia Earthquake System), for cluster analysis, spatio-temporal analysis, and visualization of the multidimensional data of seismic parameters. We perform cluster analysis using automatic clustering, which consists of finding the optimal number of clusters and applying hierarchical K-means clustering. We explore the visual clusters and multidimensional data in a low-dimensional visualization space. We conducted experiments with observed data consisting of seismic data around the Indonesian archipelago from 2004 to 2014. Keywords: Clustering, visualization, multidimensional data, seismic parameters.
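    As a rough sketch of the "automatic clustering" step described above (choosing the number of clusters, then clustering), the example below selects k by silhouette score and fits K-means to synthetic epicentre coordinates. The selection criterion, the feature choice, and all names are assumptions, and the paper's hierarchical K-means variant is simplified here to flat K-means.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def auto_cluster(points, k_range=range(2, 11), random_state=0):
        """Choose the number of clusters by silhouette score, then fit K-means.

        points : (n_events, n_features) array, e.g. longitude and latitude of
                 earthquake epicentres (the feature choice is illustrative).
        """
        best_k, best_score, best_model = None, -1.0, None
        for k in k_range:
            model = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(points)
            score = silhouette_score(points, model.labels_)
            if score > best_score:
                best_k, best_score, best_model = k, score, model
        return best_k, best_model

    # Example with synthetic epicentres scattered around three source regions.
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal(c, 0.5, size=(100, 2))
                     for c in ((95, 3), (110, -7), (128, 1))])
    k, model = auto_cluster(pts)
    print(k, np.bincount(model.labels_))
    ```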

  12. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue

    Directory of Open Access Journals (Sweden)

    Ashley J Booth

    2015-06-01

    The ease of synchronising movements to a rhythmic cue depends on the modality of the cue presentation: timing accuracy is much higher when synchronising with discrete auditory rhythms than with an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g. a dot following an oscillatory trajectory). Similarly, when synchronising with an auditory target metronome in the presence of a second, visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal information only. The present study investigates individuals' ability to synchronise movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli affected the maintenance of synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centred on a large projection screen. The target dot was surrounded by 2, 8 or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100 or 200 ms. We found that participants' timing performance was only affected in the phase-lead conditions and when large numbers of distractors were present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronise movements. Consequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  13. Visualization and assessment of spatio-temporal covariance properties

    KAUST Repository

    Huang, Huang

    2017-11-23

    Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure, with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lag summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions, so a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performance of the proposed methods is examined through simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
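    The central construction above, test functions of temporal lag built from cross-covariances at pairs of spatial locations, can be illustrated with a minimal sketch. For one spatial pair, the difference between the two directed cross-covariance functions should hover around zero under full symmetry and depart from zero otherwise; the names, the empirical estimator, and the synthetic example below are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def cross_covariance(x, y, max_lag):
        """Empirical cross-covariance C_xy(u) = cov(x_t, y_{t+u}), u = 0..max_lag."""
        x = x - x.mean()
        y = y - y.mean()
        n = x.size
        return np.array([np.dot(x[: n - u], y[u:]) / n for u in range(max_lag + 1)])

    def asymmetry_test_function(x, y, max_lag):
        """Test function of temporal lag for one spatial pair (x at s1, y at s2).

        Under full symmetry, C_xy(u) and C_yx(u) coincide, so this difference
        should fluctuate around zero at all lags.
        """
        return cross_covariance(x, y, max_lag) - cross_covariance(y, x, max_lag)

    # Example: two correlated series where y lags x by 3 steps, producing a
    # clearly non-zero (asymmetric) test function.
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(5000)
    x = np.convolve(noise, 0.8 ** np.arange(50))[:5000]   # smooth AR(1)-like series
    y = np.roll(x, 3) + 0.1 * rng.standard_normal(5000)   # same series, delayed
    print(np.round(asymmetry_test_function(x, y, 5), 2))
    ```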

  14. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users, who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated

  15. Brain SPECT in mesial temporal lobe epilepsy: comparison between visual analysis and SPM (Statistical Parametric Mapping)

    Energy Technology Data Exchange (ETDEWEB)

    Amorim, Barbara Juarez; Ramos, Celso Dario; Santos, Allan Oliveira dos; Lima, Mariana da Cunha Lopes de; Camargo, Edwaldo Eduardo; Etchebehere, Elba Cristina Sa de Camargo, E-mail: juarezbarbara@hotmail.co [State University of Campinas (UNICAMP), SP (Brazil). School of Medical Sciences. Dept. of Radiology; Min, Li Li; Cendes, Fernando [State University of Campinas (UNICAMP), SP (Brazil). School of Medical Sciences. Dept. of Neurology

    2010-04-15

    Objective: to compare the accuracy of SPM and visual analysis of brain SPECT in patients with mesial temporal lobe epilepsy (MTLE). Method: interictal and ictal SPECTs of 22 patients with MTLE were performed. Visual analyses were performed on interictal (VISUAL(inter)) and ictal (VISUAL(ictal/inter)) studies. SPM analysis consisted of comparing the interictal (SPM(inter)) and ictal (SPM(ictal)) SPECTs of each patient to a control group, and of comparing the perfusion of the temporal lobes between ictal and interictal studies (SPM(ictal/inter)). Results: for detection of the epileptogenic focus, the sensitivities were as follows: VISUAL(inter)=68%; VISUAL(ictal/inter)=100%; SPM(inter)=45%; SPM(ictal)=64% and SPM(ictal/inter)=77%. SPM was able to detect more areas of hyperperfusion and hypoperfusion. Conclusion: SPM did not improve the sensitivity to detect the epileptogenic focus. However, SPM detected different regions of hypoperfusion and hyperperfusion and is therefore a helpful tool to better understand the pathophysiology of seizures in MTLE. (author)
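    A small arithmetic check of the reported sensitivities, assuming they are simple proportions of the 22 patients in whom the epileptogenic focus was detected; the per-method counts below are inferred from rounding the percentages and are not stated in the abstract.

    ```python
    # Sensitivity = patients in whom the epileptogenic focus was detected / 22.
    # The counts are inferred from the reported percentages (assumption).
    n_patients = 22
    detected = {
        "VISUAL(inter)": 15,        # 15/22 ≈ 68%
        "VISUAL(ictal/inter)": 22,  # 22/22 = 100%
        "SPM(inter)": 10,           # 10/22 ≈ 45%
        "SPM(ictal)": 14,           # 14/22 ≈ 64%
        "SPM(ictal/inter)": 17,     # 17/22 ≈ 77%
    }
    for method, k in detected.items():
        print(f"{method}: {k / n_patients:.0%}")
    ```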

  16. Brain SPECT in mesial temporal lobe epilepsy: comparison between visual analysis and SPM (Statistical Parametric Mapping)

    International Nuclear Information System (INIS)

    Amorim, Barbara Juarez; Ramos, Celso Dario; Santos, Allan Oliveira dos; Lima, Mariana da Cunha Lopes de; Camargo, Edwaldo Eduardo; Etchebehere, Elba Cristina Sa de Camargo; Min, Li Li; Cendes, Fernando

    2010-01-01

    Objective: to compare the accuracy of SPM and visual analysis of brain SPECT in patients with mesial temporal lobe epilepsy (MTLE). Method: interictal and ictal SPECTs of 22 patients with MTLE were performed. Visual analyses were performed in interictal (VISUAL(inter)) and ictal (VISUAL(ictal/inter)) studies. SPM analysis consisted of comparing interictal (SPM(inter)) and ictal SPECTs (SPM(ictal)) of each patient to a control group and of comparing perfusion of the temporal lobes between ictal and interictal studies (SPM(ictal/inter)). Results: for detection of the epileptogenic focus, the sensitivities were as follows: VISUAL(inter)=68%; VISUAL(ictal/inter)=100%; SPM(inter)=45%; SPM(ictal)=64% and SPM(ictal/inter)=77%. SPM was able to detect more areas of hyperperfusion and hypoperfusion. Conclusion: SPM did not improve the sensitivity to detect the epileptogenic focus. However, SPM detected different regions of hypoperfusion and hyperperfusion and is therefore a helpful tool for better understanding the pathophysiology of seizures in MTLE. (author)
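
    To make the contrast between the two analysis strategies more concrete, the following sketch shows, in highly simplified form, the kind of voxel-wise comparison SPM performs between one patient's scan and a control group. The array sizes, thresholds, and synthetic data are illustrative assumptions only; a real SPM analysis additionally involves spatial normalization, smoothing, and correction for multiple comparisons.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic, spatially normalized perfusion volumes (20 controls, 64x64x64 voxels).
controls = rng.normal(100.0, 10.0, size=(20, 64, 64, 64))
patient_ictal = rng.normal(100.0, 10.0, size=(64, 64, 64))
patient_ictal[30:34, 30:34, 30:34] += 40.0   # simulated focal hyperperfusion

# Voxel-wise z-map of the patient's ictal scan against the control group.
mu = controls.mean(axis=0)
sd = controls.std(axis=0, ddof=1)
z_map = (patient_ictal - mu) / sd

# Threshold the map to flag candidate hyper- and hypoperfused regions.
hyper = np.argwhere(z_map > 3)
hypo = np.argwhere(z_map < -3)
print(f"{len(hyper)} voxels above +3 SD, {len(hypo)} voxels below -3 SD")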

  17. Temporal visual field defects are associated with monocular inattention in chiasmal pathology.

    Science.gov (United States)

    Fledelius, Hans C

    2009-11-01

    Chiasmal lesions have been shown to give rise occasionally to uni-ocular temporal inattention, which cannot be compensated for by volitional eye movement. This article describes the assessments of 46 such patients with chiasmal pathology. It aims to determine the clinical spectrum of this disorder, including interference with reading. Retrospective consecutive observational clinical case study over a 7-year period comprising 46 patients with chiasmal field loss of varying degrees. Observation of reading behaviour during monocular visual acuity testing was ascertained from consecutive patients who appeared unable to read optotypes on the temporal side of the chart. Visual fields were evaluated by kinetic (Goldmann) and static (Octopus) techniques. Five patients who clearly manifested this condition are presented in more detail. The results of visual field testing were related to absence or presence of uni-ocular visual inattentive behaviour for distance visual acuity testing and/or reading printed text. Despite normal eye movements, the 46 patients making up the clinical series perceived only optotypes in the nasal part of the chart, in one eye or in both, when tested for each eye in turn. The temporal optotypes were ignored, and this behaviour persisted despite instruction to search for any additional letters temporal to those which had been seen. This phenomenon of unilateral visual inattention held for both eyes in 18 patients and was unilateral in the remaining 28 patients. Partial or full reversibility after treatment was recorded in 21 of the 39 for whom reliable follow-up data were available. Reading a text was affected in 24 individuals, and permanently so in six. A neglect-like spatial unawareness and a lack of cognitive compensation for varying degrees of temporal visual field loss were present in all the patients observed. Not only is visual field loss a feature of chiasmal pathology, but the higher visual function of affording attention within the temporal visual

  18. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.

  19. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory

    Directory of Open Access Journals (Sweden)

    Masahiro eKawasaki

    2014-03-01

    Full Text Available In humans, theta phase (4–8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
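
    A minimal sketch of how theta-band phase synchronization between an electrode pair might be quantified is given below, using the phase-locking value across trials. The channel names, sampling rate, filter order, and random data are illustrative assumptions and are not taken from the study.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(x_trials, y_trials, fs, band=(4.0, 8.0)):
    # Phase-locking value between two channels across trials.
    # x_trials, y_trials: arrays of shape (n_trials, n_samples).
    # Returns one PLV per time sample (0 = no phase consistency, 1 = perfect).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x_trials, axis=1), axis=1))
    phy = np.angle(hilbert(filtfilt(b, a, y_trials, axis=1), axis=1))
    return np.abs(np.mean(np.exp(1j * (phx - phy)), axis=0))

# Hypothetical use: a frontal (Fz) and a parietal (Pz) channel, 100 trials of 2 s at 500 Hz.
fs = 500
rng = np.random.default_rng(1)
fz = rng.standard_normal((100, 2 * fs))
pz = rng.standard_normal((100, 2 * fs))
plv = theta_plv(fz, pz, fs)
print(plv.shape, float(plv.mean()))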

  20. Linguistic processing in visual and modality-nonspecific brain areas: PET recordings during selective attention.

    Science.gov (United States)

    Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto

    2004-07-01

    Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports the modality-nonspecific language processing and visual word-form processing, whereas the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's area, BA 45, 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.

  1. Evaluating spatial- and temporal-oriented multi-dimensional visualization techniques

    Directory of Open Access Journals (Sweden)

    Chong Ho Yu

    2003-07-01

    Full Text Available Visualization tools are said to be helpful for researchers to unveil hidden patterns and relationships among variables, and also for teachers to present abstract statistical concepts and complicated data structures in a concrete manner. However, higher-dimension visualization techniques can be confusing and even misleading, especially when human-instrument interface and cognitive issues are under-applied. In this article, the efficacy of function-based, data-driven, spatial-oriented, and temporal-oriented visualization techniques is discussed based upon extensive review. Readers can find practical implications for both research and instructional practices. For research purposes, the spatial-based graphs, such as Trellis displays in S-Plus, are preferable over the temporal-based displays, such as the 3D animated plot in SAS/Insight. For teaching purposes, the temporal-based displays, such as the 3D animation plot in Maple, seem to have advantages over the spatial-based graphs, such as the 3D triangular coordinate plot in SyStat.

  2. The loss of short-term visual representations over time: decay or temporal distinctiveness?

    Science.gov (United States)

    Mercer, Tom

    2014-12-01

    There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  3. Temporal Visualization for Legal Case Histories.

    Science.gov (United States)

    Harris, Chanda; Allen, Robert B.; Plaisant, Catherine; Shneiderman, Ben

    1999-01-01

    Discusses visualization of legal information using a tool for temporal information called "LifeLines." Explores ways "LifeLines" could aid in viewing the links between original case and direct and indirect case histories. Uses the case of Apple Computer, Inc. versus Microsoft Corporation and Hewlett Packard Company to…

  4. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only the state-of-the-art research direction of both big data analysis and data visualization, but also a core module of the pan-spatial information system. This paper reviews existing visual analysis methods at three levels: descriptive visual analysis, explanatory visual analysis and exploratory visual analysis, focusing on spatio-temporal big data's characteristics of multi-source, multi-granularity, multi-modal and complex association. The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis and exploratory visual reasoning in the visual analysis of spatio-temporal big data were discussed. Research shows that the study of descriptive visual analysis for data visualization is relatively mature. Explanatory visual analysis has become the focus of big data analysis; it is mainly based on interactive data mining in a visual environment to diagnose the implicit causes of problems. The exploratory visual analysis method still needs a major breakthrough.

  5. Perceptual learning modifies the functional specializations of visual cortical areas.

    Science.gov (United States)

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.

  6. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    Science.gov (United States)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  7. VISUALIZATION OF SPATIO-TEMPORAL RELATIONS IN MOVEMENT EVENT USING MULTI-VIEW

    Directory of Open Access Journals (Sweden)

    K. Zheng

    2017-09-01

    Full Text Available Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
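
    The interactive selection and filtering of spatial and temporal extents described in these two records can be illustrated with a small sketch. The table layout, column names, and coordinates below are hypothetical; the paper's actual data model is not given in the abstract.

import pandas as pd

# Hypothetical movement events extracted from taxi trajectories.
events = pd.DataFrame({
    "taxi_id": [1, 1, 2, 3],
    "lon": [120.12, 120.15, 120.20, 120.05],
    "lat": [30.25, 30.28, 30.30, 30.22],
    "time": pd.to_datetime(["2017-05-01 08:05", "2017-05-01 08:40",
                            "2017-05-01 09:10", "2017-05-01 18:30"]),
    "event": ["pickup", "dropoff", "pickup", "pickup"],
})

def select_extent(df, lon_range, lat_range, time_range):
    # Filter events to a user-selected spatio-temporal extent,
    # mimicking a brushing interaction in one of the linked views.
    mask = (df["lon"].between(*lon_range)
            & df["lat"].between(*lat_range)
            & df["time"].between(*time_range))
    return df[mask]

selection = select_extent(
    events,
    lon_range=(120.10, 120.25),
    lat_range=(30.24, 30.32),
    time_range=(pd.Timestamp("2017-05-01 08:00"), pd.Timestamp("2017-05-01 10:00")))

# Aggregate the selection for another linked view, e.g. events per hour.
print(selection.groupby(selection["time"].dt.hour).size())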

  8. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    Science.gov (United States)

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  9. Neuronal correlate of visual associative long-term memory in the primate temporal cortex

    Science.gov (United States)

    Miyashita, Yasushi

    1988-10-01

    In human long-term memory, ideas and concepts become associated in the learning process [1]. No neuronal correlate for this cognitive function has so far been described, except that memory traces are thought to be localized in the cerebral cortex; the temporal lobe has been assigned as the site for visual experience because electric stimulation of this area results in imagery recall [2], and lesions produce deficits in visual recognition of objects [3-9]. We previously reported that in the anterior ventral temporal cortex of monkeys, individual neurons have a sustained activity that is highly selective for a few of the 100 coloured fractal patterns used in a visual working-memory task [10]. Here I report the development of this selectivity through repeated trials involving the working memory. The few patterns for which a neuron was conjointly selective were frequently related to each other through stimulus-stimulus association imposed during training. The results indicate that the selectivity acquired by these cells represents a neuronal correlate of the associative long-term memory of pictures.

  10. Consolidation of visual associative long-term memory in the temporal cortex of primates.

    Science.gov (United States)

    Miyashita, Y; Kameyama, M; Hasegawa, I; Fukushima, T

    1998-01-01

    Neuropsychological theories have proposed a critical role for the interaction between the medial temporal lobe and the neocortex in the formation of long-term memory for facts and events, which has often been tested by learning of a series of paired words or figures in humans. We have examined neural mechanisms underlying the memory "consolidation" process by single-unit recording and molecular biological methods in an animal model of a visual pair-association task in monkeys. In our previous studies, we found that long-term associative representations of visual objects are acquired through learning in the neural network of the anterior inferior temporal (IT) cortex. In this article, we propose the hypothesis that limbic neurons undergo rapid modification of synaptic connectivity and provide backward signals that guide the reorganization of neocortical neural circuits. Two experiments tested this hypothesis: (1) we examined the role of the backward connections from the medial temporal lobe to the IT cortex by injecting ibotenic acid into the entorhinal and perirhinal cortices, which provided massive backward projections ipsilaterally to the IT cortex. We found that the limbic lesion disrupted the associative code of the IT neurons between the paired associates, without impairing the visual response to each stimulus. (2) We then tested the first half of this hypothesis by detecting the expression of immediate-early genes in the monkey temporal cortex. We found specific expression of zif268 during the learning of a new set of paired associates in the pair-association task, most intensively in area 36 of the perirhinal cortex. All these results with the visual pair-association task support our hypothesis and demonstrate that the consolidation process, which was first proposed on the basis of clinico-psychological evidence, can now be examined in primates using neurophysiological and molecular biological approaches. Copyright 1998 Academic Press.

  11. Holistic face categorization in higher-level cortical visual areas of the normal and prosopagnosic brain: towards a non-hierarchical view of face perception

    Directory of Open Access Journals (Sweden)

    Bruno Rossion

    2011-01-01

    Full Text Available How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed in local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo’s face-like paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (fusiform face area, FFA) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no occipital face area, OFA). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient (PS) whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex.

  12. Temporal versus Superior Limbal Incision: Any difference in visual ...

    African Journals Online (AJOL)

    Aim: To compare the visual outcome of a superiorly placed limbal incision with a temporal limbal incision in extracapsular cataract surgery. The main outcome measures are visual acuity and the degree of astigmatism based on refraction. Method: A retrospective non-randomized comparative study. Medical records of 40 ...

  13. Temporal stability of visual search-driven biometrics

    Science.gov (United States)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
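
    The modelling step described above can be sketched roughly as follows: one Gaussian hidden Markov model is fitted per participant on tracked gaze samples, and a held-out session is attributed to the model with the highest log-likelihood. The hmmlearn package, the number of hidden states, and the synthetic gaze generator are illustrative choices, not the study's actual pipeline.

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def synthetic_gaze(center, n_samples=50):
    # Toy (x, y) gaze samples for one session; real data would come from an eye tracker.
    return rng.normal(loc=center, scale=20.0, size=(n_samples, 2))

# Fit one HMM "fingerprint" per participant on their tracked sessions.
train = {"subj_a": synthetic_gaze((400, 300)), "subj_b": synthetic_gaze((600, 450))}
models = {}
for subj, X in train.items():
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=100, random_state=0)
    model.fit(X)
    models[subj] = model

# Identify a held-out session by maximum log-likelihood under each model.
test_session = synthetic_gaze((400, 300))      # actually recorded from subj_a
scores = {subj: m.score(test_session) for subj, m in models.items()}
print(max(scores, key=scores.get))             # expected: 'subj_a'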

  14. Gender-specific effects of emotional modulation on visual temporal order thresholds.

    Science.gov (United States)

    Liang, Wei; Zhang, Jiyuan; Bao, Yan

    2015-09-01

    Emotions affect temporal information processing in the low-frequency time window of a few seconds, but little is known about their effect in the high-frequency domain of some tens of milliseconds. The present study aims to investigate whether negative and positive emotional states influence the ability to discriminate the temporal order of visual stimuli, and whether gender plays a role in temporal processing. Due to the hemispheric lateralization of emotion, a hemispheric asymmetry between the left and the right visual field might be expected. Using a block design, subjects were primed with neutral, negative and positive emotional pictures before performing temporal order judgment tasks. Results showed that male subjects exhibited similarly reduced order thresholds under negative and positive emotional states, while female subjects demonstrated increased threshold under positive emotional state and reduced threshold under negative emotional state. Besides, emotions influenced female subjects more intensely than male subjects, and no hemispheric lateralization was observed. These observations indicate an influence of emotional states on temporal order processing of visual stimuli, and they suggest a gender difference, which is possibly associated with a different emotional stability.
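
    As a rough illustration of how a visual temporal order threshold can be estimated, the sketch below fits a cumulative Gaussian to the proportion of "left first" responses as a function of stimulus onset asynchrony. The response proportions and the threshold definition used here are assumptions for illustration; the study's actual psychophysical procedure is not reproduced.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus onset asynchrony (ms; negative = right stimulus first) and
# synthetic proportions of "left first" responses.
soa = np.array([-120, -80, -40, -20, 20, 40, 80, 120], float)
p_left_first = np.array([0.05, 0.10, 0.25, 0.40, 0.60, 0.78, 0.92, 0.97])

def psychometric(x, pss, sigma):
    # Cumulative Gaussian: point of subjective simultaneity (pss) and spread (sigma).
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_left_first, p0=(0.0, 50.0))

# One common threshold definition: half the SOA distance between the 25% and 75% points.
threshold = (norm.ppf(0.75, loc=pss, scale=sigma) - norm.ppf(0.25, loc=pss, scale=sigma)) / 2
print(f"PSS = {pss:.1f} ms, temporal order threshold = {threshold:.1f} ms")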

  15. Decoding visual object categories from temporal correlations of ECoG signals.

    Science.gov (United States)

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone or combined with power outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
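
    A simplified sketch of the feature construction described above is given below: for each trial, the upper triangle of the electrode-by-electrode correlation matrix is used as the feature vector for a generic linear classifier. The data are synthetic, and the classifier, window length, and cross-validation scheme are illustrative assumptions rather than the study's actual settings.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_samples = 120, 16, 200

# Synthetic single-trial ECoG segments and binary object-category labels.
X_raw = rng.standard_normal((n_trials, n_electrodes, n_samples))
y = rng.integers(0, 2, size=n_trials)

def correlation_features(trial):
    # Upper triangle of the electrode-by-electrode correlation matrix for one trial.
    c = np.corrcoef(trial)                          # (n_electrodes, n_electrodes)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

X = np.array([correlation_features(trial) for trial in X_raw])   # (n_trials, n_pairs)

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")      # ~chance on random data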

  16. Language and Visual Perception Associations: Meta-Analytic Connectivity Modeling of Brodmann Area 37

    Directory of Open Access Journals (Sweden)

    Alfredo Ardila

    2015-01-01

    Full Text Available Background. Understanding the functions of different brain areas has represented a major endeavor of neurosciences. Historically, brain functions have been associated with specific cortical brain areas; however, modern neuroimaging developments suggest cognitive functions are associated with networks rather than with areas. Objectives. The purpose of this paper was to analyze the connectivity of Brodmann area (BA) 37 (posterior, inferior, and temporal/fusiform gyrus) in relation to (1) language and (2) visual processing. Methods. Two meta-analyses were initially conducted (first level analysis). The first one was intended to assess the language network in which BA37 is involved. The second one was intended to assess the visual perception network. A third meta-analysis (second level analysis) was then performed to assess contrasts and convergence between the two cognitive domains (language and visual perception). The DataBase of Brainmap was used. Results. Our results support the role of BA37 in language but by means of a distinct network from the network that supports its second most important function: visual perception. Conclusion. It was concluded that left BA37 is a common node of two distinct networks—visual recognition (perception) and semantic language functions.

  17. Language and visual perception associations: meta-analytic connectivity modeling of Brodmann area 37.

    Science.gov (United States)

    Ardila, Alfredo; Bernal, Byron; Rosselli, Monica

    2015-01-01

    Understanding the functions of different brain areas has represented a major endeavor of neurosciences. Historically, brain functions have been associated with specific cortical brain areas; however, modern neuroimaging developments suggest cognitive functions are associated with networks rather than with areas. The purpose of this paper was to analyze the connectivity of Brodmann area (BA) 37 (posterior, inferior, and temporal/fusiform gyrus) in relation to (1) language and (2) visual processing. Two meta-analyses were initially conducted (first level analysis). The first one was intended to assess the language network in which BA37 is involved. The second one was intended to assess the visual perception network. A third meta-analysis (second level analysis) was then performed to assess contrasts and convergence between the two cognitive domains (language and visual perception). The DataBase of Brainmap was used. Our results support the role of BA37 in language but by means of a distinct network from the network that supports its second most important function: visual perception. It was concluded that left BA37 is a common node of two distinct networks: visual recognition (perception) and semantic language functions.

  18. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  19. Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices.

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K; Petkov, Christopher I

    2014-02-12

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies.

  20. VAUD: A Visual Analysis Approach for Exploring Spatio-Temporal Urban Data.

    Science.gov (United States)

    Chen, Wei; Huang, Zhaosong; Wu, Feiran; Zhu, Minfeng; Guan, Huihua; Maciejewski, Ross

    2017-10-02

    Urban data is massive, heterogeneous, and spatio-temporal, posing a substantial challenge for visualization and analysis. In this paper, we design and implement a novel visual analytics approach, Visual Analyzer for Urban Data (VAUD), that supports the visualization, querying, and exploration of urban data. Our approach allows for cross-domain correlation from multiple data sources by leveraging spatial-temporal and social inter-connectedness features. Through our approach, the analyst is able to select, filter, and aggregate across multiple data sources and extract information that would remain hidden within any single data subset. To illustrate the effectiveness of our approach, we provide case studies on a real urban dataset that contains the cyber-, physical-, and social information of 14 million citizens over 22 days.

  1. Lateralization of spatial rather than temporal attention underlies the left hemifield advantage in rapid serial visual presentation.

    Science.gov (United States)

    Asanowicz, Dariusz; Kruse, Lena; Śmigasiewicz, Kamila; Verleger, Rolf

    2017-11-01

    In bilateral rapid serial visual presentation (RSVP), the second of two targets, T1 and T2, is better identified in the left visual field (LVF) than in the right visual field (RVF). This LVF advantage may reflect hemispheric asymmetry in temporal attention and/or in spatial orienting of attention. Participants performed two tasks: the "standard" bilateral RSVP task (Exp.1) and its unilateral variant (Exp.1 & 2). In the bilateral task, spatial location was uncertain, thus target identification involved stimulus-driven spatial orienting. In the unilateral task, the targets were presented block-wise in the LVF or RVF only, such that no spatial orienting was needed for target identification. Temporal attention was manipulated in both tasks by varying the T1-T2 lag. The results showed that the LVF advantage disappeared when involvement of stimulus-driven spatial orienting was eliminated, whereas the manipulation of temporal attention had no effect on the asymmetry. In conclusion, the results do not support the hypothesis of hemispheric asymmetry in temporal attention, and provide further evidence that the LVF advantage reflects right hemisphere predominance in stimulus-driven orienting of spatial attention. These conclusions fit evidence that temporal attention is implemented by bilateral parietal areas and spatial attention by the right-lateralized ventral frontoparietal network. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    Science.gov (United States)

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  3. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

    Warrick eRoseboom

    2013-04-01

    Full Text Available It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this was necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  4. Structural and effective connectivity reveals potential network-based influences on category-sensitive visual areas

    Directory of Open Access Journals (Sweden)

    Nicholas eFurl

    2015-05-01

    Full Text Available Visual category perception is thought to depend on brain areas that respond specifically when certain categories are viewed. These category-sensitive areas are often assumed to be modules (with some degree of processing autonomy) and to act predominantly on feedforward visual input. This modular view can be complemented by a view that treats brain areas as elements within more complex networks and as influenced by network properties. This network-oriented viewpoint is emerging from studies using either diffusion tensor imaging to map structural connections or effective connectivity analyses to measure how their functional responses influence each other. This literature motivates several hypotheses that predict category-sensitive activity based on network properties. Large, long-range fiber bundles such as inferior fronto-occipital, arcuate and inferior longitudinal fasciculi are associated with behavioural recognition and could play crucial roles in conveying backward influences on visual cortex from anterior temporal and frontal areas. Such backward influences could support top-down functions such as visual search and emotion-based visual modulation. Within visual cortex itself, areas sensitive to different categories appear well-connected (e.g., face areas connect to object- and motion-sensitive areas) and their responses can be predicted by backward modulation. Evidence supporting these propositions remains incomplete and underscores the need for better integration of DTI and functional imaging.

  5. Temporal Expectations Guide Dynamic Prioritization in Visual Working Memory through Attenuated α Oscillations.

    Science.gov (United States)

    van Ede, Freek; Niklaus, Marcel; Nobre, Anna C

    2017-01-11

    Although working memory is generally considered a highly dynamic mnemonic store, popular laboratory tasks used to understand its psychological and neural mechanisms (such as change detection and continuous reproduction) often remain relatively "static," involving the retention of a set number of items throughout a shared delay interval. In the current study, we investigated visual working memory in a more dynamic setting, and assessed the following: (1) whether internally guided temporal expectations can dynamically and reversibly prioritize individual mnemonic items at specific times at which they are deemed most relevant; and (2) the neural substrates that support such dynamic prioritization. Participants encoded two differently colored oriented bars into visual working memory to retrieve the orientation of one bar with a precision judgment when subsequently probed. To test for the flexible temporal control to access and retrieve remembered items, we manipulated the probability for each of the two bars to be probed over time, and recorded EEG in healthy human volunteers. Temporal expectations had a profound influence on working memory performance, leading to faster access times as well as more accurate orientation reproductions for items that were probed at expected times. Furthermore, this dynamic prioritization was associated with the temporally specific attenuation of contralateral α (8-14 Hz) oscillations that, moreover, predicted working memory access times on a trial-by-trial basis. We conclude that attentional prioritization in working memory can be dynamically steered by internally guided temporal expectations, and is supported by the attenuation of α oscillations in task-relevant sensory brain areas. In dynamic, everyday-like, environments, flexible goal-directed behavior requires that mental representations that are kept in an active (working memory) store are dynamic, too. We investigated working memory in a more dynamic setting than is conventional
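
    A minimal sketch of how the attenuation of contralateral alpha (8-14 Hz) power could be quantified from epoched EEG is shown below, using a band-pass filter and the Hilbert amplitude envelope. The sampling rate, filter settings, channel assignment, and random data are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(trials, fs, band=(8.0, 14.0)):
    # Mean alpha-band amplitude envelope per trial; trials has shape (n_trials, n_samples).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    return envelope.mean(axis=1)

fs = 250
rng = np.random.default_rng(2)
contra = rng.standard_normal((200, 2 * fs))   # posterior channel contralateral to the prioritized item
ipsi = rng.standard_normal((200, 2 * fs))     # ipsilateral counterpart

# Negative values indicate lower contralateral than ipsilateral alpha power (attenuation).
attenuation_index = alpha_power(contra, fs) - alpha_power(ipsi, fs)
print(f"mean contralateral-minus-ipsilateral alpha: {attenuation_index.mean():.3f}")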

  6. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  7. The Role of Visual and Auditory Temporal Processing for Chinese Children with Developmental Dyslexia

    Science.gov (United States)

    Chung, Kevin K. H.; McBride-Chang, Catherine; Wong, Simpson W. L.; Cheung, Him; Penney, Trevor B.; Ho, Connie S. -H.

    2008-01-01

    This study examined temporal processing in relation to Chinese reading acquisition and impairment. The performances of 26 Chinese primary school children with developmental dyslexia on tasks of visual and auditory temporal order judgement, rapid naming, visual-orthographic knowledge, morphological, and phonological awareness were compared with…

  8. Visual temporal processing in dyslexia and the magnocellular deficit theory: the need for speed?

    Science.gov (United States)

    McLean, Gregor M T; Stuart, Geoffrey W; Coltheart, Veronika; Castles, Anne

    2011-12-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore temporal aspects of magnocellular functioning in 40 children with dyslexia and 42 age-matched controls (aged 7-11). The relationship between magnocellular temporal resolution and higher-level aspects of visual temporal processing including inspection time, single and dual-target (attentional blink) RSVP performance, go/no-go reaction time, and rapid naming was also assessed. The Dyslexia group exhibited significant deficits in magnocellular temporal resolution compared with controls, but the two groups did not differ in parvocellular temporal resolution. Despite the significant group differences, associations between magnocellular temporal resolution and reading ability were relatively weak, and links between low-level temporal resolution and reading ability did not appear specific to the magnocellular system. Factor analyses revealed that a collective Perceptual Speed factor, involving both low-level and higher-level visual temporal processing measures, accounted for unique variance in reading ability independently of phonological processing, rapid naming, and general ability.

  9. Neurons in cortical area MST remap the memory trace of visual motion across saccadic eye movements.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2014-05-27

    Perception of a stable visual world despite eye motion requires integration of visual information across saccadic eye movements. To investigate how the visual system deals with localization of moving visual stimuli across saccades, we observed spatiotemporal changes of receptive fields (RFs) of motion-sensitive neurons across periods of saccades in the middle temporal (MT) and medial superior temporal (MST) areas. We found that the location of the RFs moved with shifts of eye position due to saccades, indicating that motion-sensitive neurons in both areas have retinotopic RFs across saccades. Different characteristic responses emerged when the moving visual stimulus was turned off before the saccades. For MT neurons, virtually no response was observed after the saccade, suggesting that the responses of these neurons simply reflect the reafferent visual information. In contrast, most MST neurons increased their firing rates when a saccade brought the location of the visual stimulus into their RFs, where the visual stimulus itself no longer existed. These findings suggest that the responses of such MST neurons after saccades were evoked by a memory of the stimulus that had preexisted in the postsaccadic RFs ("memory remapping"). A delayed-saccade paradigm further revealed that memory remapping in MST was linked to the saccade itself, rather than to a shift in attention. Thus, the visual motion information across saccades was integrated in spatiotopic coordinates and represented in the activity of MST neurons. This is likely to contribute to the perception of a stable visual world in the presence of eye movements.

  10. A Visual Analytics Approach for Extracting Spatio-Temporal Urban Mobility Information from Mobile Network Traffic

    Directory of Open Access Journals (Sweden)

    Euro Beinat

    2012-11-01

    Full Text Available In this paper we present a visual analytics approach for deriving spatio-temporal patterns of collective human mobility from a vast mobile network traffic data set. More than 88 million movements between pairs of radio cells—so-called handovers—served as a proxy for more than two months of mobility within four urban test areas in Northern Italy. In contrast to previous work, our approach relies entirely on visualization and mapping techniques, implemented in several software applications. We purposefully avoid statistical or probabilistic modeling and, nonetheless, reveal characteristic and exceptional mobility patterns. The results show, for example, surprising similarities and symmetries amongst the total mobility and people flows between the test areas. Moreover, the exceptional patterns detected can be associated to real-world events such as soccer matches. We conclude that the visual analytics approach presented can shed new light on large-scale collective urban mobility behavior and thus helps to better understand the “pulse” of dynamic urban systems.

  11. Visualization of spatial-temporal data based on 3D virtual scene

    Science.gov (United States)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to achieve three-dimensional dynamic visualization of spatial-temporal data within a three-dimensional virtual scene, using three-dimensional visualization technology combined with GIS, so that people's ability to comprehend time and space is enhanced through dynamic symbol design and interactive presentation. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the changes in the spatial location and attribute information of geographical entities over time, explore and analyze their movement and transformation patterns through interaction, and replay the past or forecast the future. The main research objects in this paper are vehicle tracks and typhoon paths together with their spatial-temporal data; through three-dimensional dynamic simulation of these tracks, their trends can be monitored in a timely manner and their historical tracks replayed. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an effective cognitive instrument for spatial-temporal information: they not only show the changes and developments of a situation more clearly, but can also be used to predict and reason about future developments and changes.

  12. Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Science.gov (United States)

    Muñoz-López, M; Insausti, R; Mohedano-Moriano, A; Mishkin, M; Saunders, R C

    2015-01-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may differ from those in vision. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

  13. Anatomical pathways for auditory memory II: Information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Directory of Open Access Journals (Sweden)

    Monica eMunoz-Lopez

    2015-05-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 minutes. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 seconds. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys’ auditory memory performance. It is possible, therefore, that the anatomical pathways differ. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex, and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

  14. Measuring temporal summation in visual detection with a single-photon source.

    Science.gov (United States)

    Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G

    2017-11-01

    Temporal summation is an important feature of the visual system which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100 ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650 ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100 ms or less) while the number of photons was varied to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by greater than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.
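
    As a rough illustration of how an integration window can be read off such data, the sketch below fits a two-segment "hinge" function (accuracy rises with duration, then plateaus) to hypothetical accuracy-versus-duration points and takes the breakpoint as the window estimate. The data values, the hinge model, and the use of scipy.optimize.curve_fit are assumptions for illustration, not the authors' analysis.

```python
# Illustrative only: estimate a temporal integration window as the duration
# beyond which detection accuracy stops improving (hinge/plateau fit).
import numpy as np
from scipy.optimize import curve_fit

durations = np.array([50, 100, 200, 400, 650, 900, 1200])          # ms (made up)
accuracy  = np.array([0.55, 0.60, 0.68, 0.75, 0.80, 0.81, 0.80])   # made up

def hinge(t, a, b, t_break):
    # accuracy rises linearly with duration until t_break, then stays flat
    return a + b * np.minimum(t, t_break)

params, _ = curve_fit(hinge, durations, accuracy, p0=[0.5, 0.0003, 600.0])
print(f"estimated integration window ~ {params[2]:.0f} ms")
```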

  15. Spatio-Temporal Database of Places Located in the Border Area

    Directory of Open Access Journals (Sweden)

    Albina Mościcka

    2018-03-01

    As a result of changes in boundaries, the political affiliation of locations also changes. Data on such locations are now collected in datasets with reference to the present or to the past space. Therefore, they can refer to localities that either no longer exist, have a different name now, or lie outside of the current borders of the country. Moreover, thematic data describing the past are related to events, customs, items that are always “somewhere”. Storytelling about the past is incomplete without knowledge about the places in which the given story has happened. Therefore, the objective of the article is to discuss the concept of a spatio-temporal database for border areas as an “engine” for visualization of thematic data in time-oriented geographical space. The paper focuses on studying the place names on the Polish-Ukrainian border, analyzing the changes that have occurred in this area over the past 80 years (where there were three different countries during this period), and defining the changeability rules. As a result of the research, the architecture of spatio-temporal databases is defined, as well as the rules for using them for data geovisualisation in a historical context.
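
    The article describes the database conceptually rather than giving a schema, so the snippet below is only a guessed-at minimal sketch of one way to store time-bounded place records (name, country, coordinates, validity interval) so that a historical query can be answered; the table, column names, and the simplified example history are invented for illustration and are not the article's design.

```python
# Minimal sketch (assumed schema, not the article's): place versions with a
# validity interval, so one location can carry different names/countries over time.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE place_version (
        place_id    INTEGER,
        name        TEXT,
        country     TEXT,
        lon         REAL,
        lat         REAL,
        valid_from  TEXT,   -- ISO date
        valid_to    TEXT    -- ISO date, NULL = still valid
    )""")
rows = [   # simplified, illustrative history only
    (1, "Lwow", "Poland",  24.03, 49.84, "1918-11-01", "1939-09-17"),
    (1, "Lvov", "USSR",    24.03, 49.84, "1939-09-17", "1991-08-24"),
    (1, "Lviv", "Ukraine", 24.03, 49.84, "1991-08-24", None),
]
con.executemany("INSERT INTO place_version VALUES (?,?,?,?,?,?,?)", rows)

# Which name/country was valid on a given date?
query_date = "1950-06-01"
cur = con.execute("""
    SELECT name, country FROM place_version
    WHERE place_id = 1
      AND valid_from <= ?
      AND (valid_to IS NULL OR valid_to > ?)""", (query_date, query_date))
print(cur.fetchone())
```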

  16. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    Science.gov (United States)

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of

  17. A Headset Method for Measuring the Visual Temporal Discrimination Threshold in Cervical Dystonia

    Directory of Open Access Journals (Sweden)

    Anna Molloy

    2014-07-01

    Background: The visual temporal discrimination threshold (TDT) is the shortest time interval at which one can determine two stimuli to be asynchronous and meets criteria for a valid endophenotype in adult‐onset idiopathic focal dystonia, a poorly penetrant disorder. Temporal discrimination is assessed in the hospital laboratory; in unaffected relatives of multiplex adult‐onset dystonia patients distance from the hospital is a barrier to data acquisition. We devised a portable headset method for visual temporal discrimination determination and our aim was to validate this portable tool against the traditional laboratory‐based method in a group of patients and in a large cohort of healthy controls. Methods: Visual TDTs were examined in two groups: (1) in 96 healthy control participants divided by age and gender, and (2) in 33 cervical dystonia patients, using two methods of data acquisition, the traditional table‐top laboratory‐based system, and the novel portable headset method. The order of assessment was randomized in the control group. The results obtained by each technique were compared. Results: Visual temporal discrimination in healthy control participants demonstrated similar age and gender effects by the headset method as found by the table‐top examination. There were no significant differences between visual TDTs obtained using the two methods, both for the control participants and for the cervical dystonia patients. Bland–Altman testing showed good concordance between the two methods in both patients and in controls. Discussion: The portable headset device is a reliable and accurate method for visual temporal discrimination testing for use outside the laboratory, and will facilitate increased TDT data collection outside of the hospital setting. This is of particular importance in multiplex families where data collection in all available members of the pedigree is important for exome sequencing studies.

  18. Role of inferior temporal neurons in visual memory. II. Multiplying temporal waveforms related to vision and memory.

    Science.gov (United States)

    Eskandar, E N; Optican, L M; Richmond, B J

    1992-10-01

    1. In the companion paper we reported on the activity of neurons in the inferior temporal (IT) cortex during a sequential pattern matching task. In this task a sample stimulus was followed by a test stimulus that was either a match or a nonmatch. Many of the neurons encoded information about the patterns of both current and previous stimuli in the temporal modulation of their responses. 2. A simple information processing model of visual memory can be formed with just four steps: 1) encode the current stimulus; 2) recall the code of a remembered stimulus; 3) compare the two codes; 4) decide whether they are similar or different. The analysis presented in the first paper suggested that some IT neurons were performing the comparison step of visual memory. 3. We propose that IT neurons participate in the comparison of temporal waveforms related to vision and memory by multiplying them together. This product could form the basis of a cross-correlation-based comparison. 4. We tested our hypothesis by fitting a simple multiplicative model to data from IT neurons. The model generated waveforms in separate memory and visual channels. The waveforms arising from the two channels were then multiplied on a point-by-point basis to yield the output waveform. The model was fitted to the actual neuronal data by a gradient descent method to find the best-fit waveforms that also had the lowest total energy. 5. The multiplicative model fit the neuronal responses quite well. The multiplicative model made consistently better predictions of the actual response waveforms than did an additive model. Furthermore, the fit was better when the actual relationship between the responses and the sample and test stimuli was preserved than when that relationship was randomized. 6. We infer from the superior fit of the multiplicative model that IT neurons are multiplying temporally modulated waveforms arising from separate visual and memory systems in the comparison step of visual memory.
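
    To make the additive-versus-multiplicative comparison concrete, here is a small self-contained sketch that builds a "memory" and a "visual" waveform, forms a response as their point-by-point product, and checks that a multiplicative combination fits it better than an additive one. The waveforms, noise level, and simple least-squares scaling are illustrative assumptions; the authors' actual procedure fitted constrained waveforms to recorded responses by gradient descent.

```python
# Illustration of the idea only: a response formed by multiplying a memory and
# a visual waveform is captured better by a multiplicative than an additive model.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.5, 250)                              # seconds
visual = np.exp(-((t - 0.12) / 0.04) ** 2)                # transient visual waveform
memory = 0.4 + 0.6 * np.exp(-((t - 0.30) / 0.10) ** 2)    # slower memory waveform
response = visual * memory + 0.02 * rng.standard_normal(t.size)

def fit_error(model):
    # optimal least-squares scaling of the candidate model to the response
    scale = np.dot(model, response) / np.dot(model, model)
    return np.mean((response - scale * model) ** 2)

err_mult = fit_error(visual * memory)
err_add = fit_error(visual + memory)
print(f"multiplicative MSE = {err_mult:.5f}, additive MSE = {err_add:.5f}")
```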

  19. Temporal integration windows for naturalistic visual sequences.

    Directory of Open Access Journals (Sweden)

    Scott L Fairhall

    There is increasing evidence that the brain possesses mechanisms to integrate incoming sensory information as it unfolds over time periods of 2-3 seconds. The ubiquity of this mechanism across modalities, tasks, perception and production has led to the proposal that it may underlie our experience of the subjective present. A critical test of this claim is that this phenomenon should be apparent in naturalistic visual experiences. We tested this using movie-clips as a surrogate for our day-to-day experience, temporally scrambling them to require (re-)integration within and beyond the hypothesized 2-3 second interval. Two independent experiments demonstrate a step-wise increase in the difficulty to follow stimuli at the hypothesized 2-3 second scrambling condition. Moreover, only this difference could not be accounted for by low-level visual properties. This provides the first evidence that this 2-3 second integration window extends to complex, naturalistic visual sequences more consistent with our experience of the subjective present.

  20. Effective Connectivity from Early Visual Cortex to Posterior Occipitotemporal Face Areas Supports Face Selectivity and Predicts Developmental Prosopagnosia.

    Science.gov (United States)

    Lohse, Michael; Garrido, Lucia; Driver, Jon; Dolan, Raymond J; Duchaine, Bradley C; Furl, Nicholas

    2016-03-30

    Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projection from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectivity have remained mostly unknown. We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face

  1. Visual field defects after temporal lobe resection for epilepsy

    DEFF Research Database (Denmark)

    Steensberg, Alvilda T; Olsen, Ane Sophie; Litman, Minna

    2018-01-01

    PURPOSE: To determine visual field defects (VFDs) using methods of varying complexity and compare results with subjective symptoms in a population of newly operated temporal lobe epilepsy patients. METHODS: Forty patients were included in the study. Two patients failed to perform VFD testing...... symptoms were only reported by 28% of the patients with a VFD and in two of eight (sensitivity=25%) with a severe VFD. Most patients (86%) considered VFD information mandatory. CONCLUSION: VFD continue to be a frequent adverse event after epilepsy surgery in the medial temporal lobe and may affect...

  2. Visual field defects after temporal lobe resection for epilepsy.

    Science.gov (United States)

    Steensberg, Alvilda T; Olsen, Ane Sophie; Litman, Minna; Jespersen, Bo; Kolko, Miriam; Pinborg, Lars H

    2018-01-01

    To determine visual field defects (VFDs) using methods of varying complexity and compare results with subjective symptoms in a population of newly operated temporal lobe epilepsy patients. Forty patients were included in the study. Two patients failed to perform VFD testing. Humphrey Field Analyzer (HFA) perimetry was used as the gold standard test to detect VFDs. All patients performed a web-based visual field test called Damato Multifixation Campimetry Online (DMCO). A bedside confrontation visual field examination ad modum Donders was extracted from the medical records in 27/38 patients. All participants had a consultation by an ophthalmologist. A questionnaire described the subjective complaints. A VFD in the upper quadrant was demonstrated with HFA in 29 (76%) of the 38 patients after surgery. In 27 patients tested ad modum Donders, the sensitivity of detecting a VFD was 13%. Eight patients (21%) had a severe VFD similar to a quadrantanopia, thus questioning their permission to drive a car. In this group of patients, a VFD was demonstrated in one of five (sensitivity=20%) ad modum Donders and in seven of eight (sensitivity=88%) with DMCO. Subjective symptoms were only reported by 28% of the patients with a VFD and in two of eight (sensitivity=25%) with a severe VFD. Most patients (86%) considered VFD information mandatory. VFD continue to be a frequent adverse event after epilepsy surgery in the medial temporal lobe and may affect the permission to drive a car in at least one in five patients. Subjective symptoms and bedside visual field testing ad modum Donders are not sensitive to detect even a severe VFD. Newly developed web-based visual field test methods appear sensitive to detect a severe VFD but perimetry remains the gold standard for determining whether visual standards for driving are fulfilled. Patients consider VFD information as mandatory. Copyright © 2017. Published by Elsevier Ltd.

  3. Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.

    Science.gov (United States)

    Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris

    2016-05-04

    Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Effects of Temporal Congruity Between Auditory and Visual Stimuli Using Rapid Audio-Visual Serial Presentation.

    Science.gov (United States)

    An, Xingwei; Tang, Jiabei; Liu, Shuang; He, Feng; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2016-10-01

    Combining visual and auditory stimuli in event-related potential (ERP)-based spellers has gained more attention in recent years. Few of these studies have noted the differences in ERP components and system efficiency caused by shifts between visual and auditory onsets. Here, we aim to study the effect of temporal congruity of auditory and visual stimulus onsets on a bimodal brain-computer interface (BCI) speller. We designed five visual and auditory combined paradigms with different visual-to-auditory delays (-33 to +100 ms). Eleven participants took part in this study. ERPs were acquired and aligned according to visual and auditory stimulus onsets, respectively. ERPs of Fz, Cz, and PO7 channels were studied through the statistical analysis of different conditions both from visual-aligned ERPs and audio-aligned ERPs. Based on the visual-aligned ERPs, classification accuracy was also analyzed to seek the effects of visual-to-auditory delays. The latencies of ERP components depended mainly on the visual stimulus onset. Auditory stimulus onsets influenced mainly early component accuracies, whereas visual stimulus onset determined later component accuracies. The latter, however, played a dominant role in overall classification. This study is important for further studies to achieve better explanations and ultimately determine the way to optimize the bimodal BCI application.

  5. Saccade-synchronized rapid attention shifts in macaque visual cortical area MT.

    Science.gov (United States)

    Yao, Tao; Treue, Stefan; Krishna, B Suresh

    2018-03-06

    While making saccadic eye-movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in the cortical visual middle temporal (MT) area is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.

  6. Spatio-temporal flow maps for visualizing movement and contact patterns

    Directory of Open Access Journals (Sweden)

    Bing Ni

    2017-03-01

    Advanced telecom technologies and massive numbers of smartphone users have yielded a huge amount of real-time data of people's all-in-one telecommunication records, which we call telco big data. With telco data and the domain knowledge of an urban city, we are now able to analyze the movement and contact patterns of humans at an unprecedented scale. Flow maps are widely used to display the movements of humans from one single source to multiple destinations by representing locations as nodes and movements as edges. However, they fail at the task of visualizing both movement and contact data. In addition, analysts often need to compare and examine the patterns side by side, and perform various quantitative analyses. In this work, we propose a novel spatio-temporal flow map layout to visualize when and where people from different locations move into the same places and make contact. We also propose integrating the spatio-temporal flow maps into existing spatio-temporal visualization techniques to form a suite of techniques for visualizing the movement and contact patterns. We report a potential application to which the proposed techniques can be applied. The results show that our design and techniques properly unveil hidden information, while analysis can be achieved efficiently. Keywords: Spatio-temporal data, Flow map, Urban mobility

  7. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part II: cognitive factors shaping visual field maps.

    Science.gov (United States)

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds for 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. The extraction of cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.

  8. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  9. Adjusted functional boxplots for spatio-temporal data visualization and outlier detection

    KAUST Repository

    Sun, Ying; Genton, Marc G.

    2011-01-01

    This article proposes a simulation-based method to adjust functional boxplots for correlations when visualizing functional and spatio-temporal data, as well as detecting outliers. We start by investigating the relationship between the spatio

  10. Integrating what and when across the primate medial temporal lobe.

    Science.gov (United States)

    Naya, Yuji; Suzuki, Wendy A

    2011-08-05

    Episodic memory or memory for the detailed events in our lives is critically dependent on structures of the medial temporal lobe (MTL). A fundamental component of episodic memory is memory for the temporal order of items within an episode. To understand the contribution of individual MTL structures to temporal-order memory, we recorded single-unit activity and local field potential from three MTL areas (hippocampus and entorhinal and perirhinal cortex) and visual area TE as monkeys performed a temporal-order memory task. Hippocampus provided incremental timing signals from one item presentation to the next, whereas perirhinal cortex signaled the conjunction of items and their relative temporal order. Thus, perirhinal cortex appeared to integrate timing information from hippocampus with item information from visual sensory area TE.

  11. Visualization of subtle temporal bone structures. Comparison of cone beam CT and MDCT

    International Nuclear Information System (INIS)

    Pein, M.K.; Plontke, S.K.; Brandt, S.; Koesling, S.

    2014-01-01

    The purpose of this study was to compare the visualization of subtle, non-pathological temporal bone structures on cone beam computed tomography (CBCT) and multi-detector computed tomography (MDCT) in vivo. Temporal bone studies of images from 38 patients archived in the picture archiving and communication system (PACS) were analyzed (slice thickness MDCT 0.6 mm and CBCT 0.125 mm) of which 23 were imaged by MDCT and 15 by CBCT using optimized standard protocols. Inclusion criteria were normal radiological findings, absence of previous surgery and anatomical variants. Images were evaluated blind by three trained observers. Using a five-point scale the visualization of ten subtle structures of the temporal bone was analyzed. Subtle middle ear structures showed a tendency to be more easily distinguishable by CBCT with significantly better visualization of the tendon of the stapedius muscle and the crura of the stapes on CBCT (p = 0.003 and p = 0.033, respectively). In contrast, inner ear components, such as the osseus spiral lamina and the modiolus tended to be better detectable on MDCT, showing significant differences for the osseous spiral lamina (p = 0.001). The interrater reliability was 0.73 (Cohen's kappa coefficient) and intraobserver reliability was 0.89. The use of CBCT and MDCT allows equivalent and excellent imaging results if optimized protocols are chosen. With both imaging techniques subtle temporal bone structures could be visualized with a similar degree of definition. In vivo differences do not seem to be as large as suggested in several previous studies. (orig.) [de

  12. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    Science.gov (United States)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and

  13. Visual Statistical Learning Works after Binding the Temporal Sequences of Shapes and Spatial Positions

    Directory of Open Access Journals (Sweden)

    Osamu Watanabe

    2011-05-01

    The human visual system can acquire the statistical structures in temporal sequences of object feature changes, such as changes in shape, color, and its combination. Here we investigate whether the statistical learning for spatial position and shape changes operates separately or not. It is known that the visual system processes these two types of information separately; the spatial information is processed in the parietal cortex, whereas object shapes and colors are detected in the temporal pathway, and, after that, we perceive bound information in the two streams. We examined whether the statistical learning operates before or after binding the shape and the spatial information by using the “re-paired triplet” paradigm proposed by Turk-Browne, Isola, Scholl, and Treat (2008). The result showed that observers acquired combined sequences of shape and position changes, but no statistical information in individual sequence was obtained. This finding suggests that the visual statistical learning works after binding the temporal sequences of shapes and spatial structures and would operate in the higher-order visual system; this is consistent with recent ERP (Abla & Okanoya, 2009) and fMRI (Turk-Browne, Scholl, Chun, & Johnson, 2009) studies.

  14. Temporal Stability of Visual Search-Driven Biometrics

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL; Carmichael, Tandy [Tennessee Technological University; Tourassi, Georgia [ORNL

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.
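
    The abstract does not give the model details, so the following is only a rough sketch of the general recipe (one HMM per participant fitted to gaze-feature sequences, identification by maximum log-likelihood), using the hmmlearn package and synthetic features; the feature set, number of hidden states, and data shapes are assumptions, not the study's configuration.

```python
# Sketch of the general approach (assumed details, synthetic data): fit one
# Gaussian HMM per participant, then identify a new session by the model that
# assigns it the highest log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)

def synthetic_gaze(offset):
    # columns: fixation duration (s), saccade amplitude (deg) -- made-up features
    return np.column_stack([
        rng.gamma(2.0, 0.15 + offset, size=300),
        rng.gamma(2.0, 1.0 + 2 * offset, size=300),
    ])

participants = {pid: synthetic_gaze(0.05 * pid) for pid in range(4)}

models = {}
for pid, X in participants.items():
    m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
    m.fit(X)                       # the participant's "fingerprint"
    models[pid] = m

probe = synthetic_gaze(0.10)       # new session, generated like participant 2
scores = {pid: m.score(probe) for pid, m in models.items()}
print("identified as participant", max(scores, key=scores.get))
```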

  15. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    Science.gov (United States)

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

    Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of classical theta (4-7Hz), alpha (8-13Hz) and beta bands (14-20Hz) using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
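
    For readers unfamiliar with how "EEG-stimulus locking" is quantified, the sketch below computes a phase-locking value between a quasi-rhythmic stimulus time course and a toy EEG trace using band-pass filtering and the Hilbert transform; the sampling rate, signal model, and alpha-band filter settings are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative computation of stimulus-brain phase locking (PLV) for a
# quasi-rhythmic alpha-band stimulus and a noisy "EEG" channel.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 250.0                                        # Hz, assumed sampling rate
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)

# Quasi-rhythmic stimulus: instantaneous frequency wandering within 8-13 Hz.
freq = 10.5 + 2.0 * np.sin(2 * np.pi * 0.1 * t)
stim = np.sin(2 * np.pi * np.cumsum(freq) / fs)

# Toy "EEG": stimulus-driven component plus noise.
eeg = 0.5 * stim + rng.standard_normal(t.size)

# Band-pass both signals in the stimulation range before extracting phase.
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
phase_stim = np.angle(hilbert(filtfilt(b, a, stim)))
phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))

plv = np.abs(np.mean(np.exp(1j * (phase_eeg - phase_stim))))
print(f"EEG-stimulus phase-locking value: {plv:.2f}")   # 1 = perfect locking
```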

  16. Temporally evolving gain mechanisms of attention in macaque area V4.

    Science.gov (United States)

    Sani, Ilaria; Santandrea, Elisa; Morrone, Maria Concetta; Chelazzi, Leonardo

    2017-08-01

    Cognitive attention and perceptual saliency jointly govern our interaction with the environment. Yet, we still lack a universally accepted account of the interplay between attention and luminance contrast, a fundamental dimension of saliency. We measured the attentional modulation of V4 neurons' contrast response functions (CRFs) in awake, behaving macaque monkeys and applied a new approach that emphasizes the temporal dynamics of cell responses. We found that attention modulates CRFs via different gain mechanisms during subsequent epochs of visually driven activity: an early contrast-gain, strongly dependent on prestimulus activity changes (baseline shift); a time-limited stimulus-dependent multiplicative modulation, reaching its maximal expression around 150 ms after stimulus onset; and a late resurgence of contrast-gain modulation. Attention produced comparable time-dependent attentional gain changes on cells heterogeneously coding contrast, supporting the notion that the same circuits mediate attention mechanisms in V4 regardless of the form of contrast selectivity expressed by the given neuron. Surprisingly, attention was also sometimes capable of inducing radical transformations in the shape of CRFs. These findings offer important insights into the mechanisms that underlie contrast coding and attention in primate visual cortex and a new perspective on their interplay, one in which time becomes a fundamental factor. NEW & NOTEWORTHY We offer an innovative perspective on the interplay between attention and luminance contrast in macaque area V4, one in which time becomes a fundamental factor. We place emphasis on the temporal dynamics of attentional effects, pioneering the notion that attention modulates contrast response functions of V4 neurons via the sequential engagement of distinct gain mechanisms. These findings advance understanding of attentional influences on visual processing and help reconcile divergent results in the literature. Copyright © 2017 the

  17. Measurement of temporal asymmetries of glucose consumption using linear profiles: reproducibility and comparison with visual analysis

    International Nuclear Information System (INIS)

    Matheja, P.; Kuwert, T.; Schaefers, M.; Schaefers, K.; Schober, O.; Diehl, B.; Stodieck, S.R.G.; Ringelstein, E.B.; Schuierer, G.

    1998-01-01

    The aim of our study was to test the reproducibility of this method and to compare its diagnostic performance to that of visual analysis in patients with complex partial seizures (CPS). Regional cerebral glucose consumption (rCMRGLc) was measured interictally in 25 CPS patients and 10 controls using F-18-deoxyglucose and the positron emission tomography (PET) camera ECAT EXACT 47. The PET scans were visually analyzed for the occurrence of unilateral temporal hypometabolism. Furthermore, rCMRGLc was quantified on six contiguous coronal planes by manually tracing maximal values of temporal glucose consumption, thus creating line profiles of temporal glucose consumption for each side. Indices of asymmetry (ASY) were then calculated from these line profiles in four temporal regions and compared to the corresponding 95% confidence intervals of the control data. All analyses were performed by two observers independently from each other and without knowledge of the clinical findings. The agreement between the two observers with regard to focus lateralization was 96% on visual analysis and 100% on quantitative analysis. There was an excellent agreement with regard to focus lateralization between visual and quantitative evaluation. (orig.) [de
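
    The abstract does not state the asymmetry-index formula or the confidence-interval comparison, so the snippet below only illustrates one common convention for a left-right asymmetry index applied to made-up line-profile values; both the formula and the threshold are assumptions, not the study's method.

```python
# Illustrative only: a commonly used left-right asymmetry index applied to
# made-up temporal line profiles of glucose consumption (one value per plane).
import numpy as np

left_profile  = np.array([32.1, 33.4, 31.8, 30.9, 29.7, 28.5])   # arbitrary units
right_profile = np.array([30.0, 30.8, 29.5, 27.9, 26.4, 25.9])

asy = 200.0 * (left_profile - right_profile) / (left_profile + right_profile)
print(np.round(asy, 1))           # percent asymmetry per coronal plane

# A plane would be flagged if its ASY fell outside the controls' 95% interval;
# the threshold below is a placeholder, not a value from the study.
control_limit = 5.0
print(asy[np.abs(asy) > control_limit])
```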

  18. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  19. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    Science.gov (United States)

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    Science.gov (United States)

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
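
    As background for the modeling language used in this record, the snippet below implements a textbook divisive-normalization response with a multiplicative attention field, in the spirit of classic normalization models of attention; it is a generic illustration, not the extended model or the V1-MT weighting mechanism tested in the study.

```python
# Generic divisive normalization with an attention field (illustrative only).
import numpy as np

def normalized_response(drive, attention, sigma=1.0):
    """drive: stimulus drive per neuron; attention: multiplicative gain per neuron."""
    excitatory = attention * drive
    suppressive = np.sum(excitatory)          # pooled normalization signal
    return excitatory / (sigma + suppressive)

drive = np.array([4.0, 2.0, 1.0])             # three neurons, arbitrary units
no_attention = normalized_response(drive, np.ones(3))
attend_first = normalized_response(drive, np.array([2.0, 1.0, 1.0]))
print(no_attention, attend_first)
```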

  1. Visualization of temporal aspects of tsetse fly eradication in ...

    African Journals Online (AJOL)

    A pattern of how they are applied over time was provided in the animation representation. Further information on the areas where different techniques were applied in different years is interactively visualized. Visualization of infestation changes over time was also provided by the animation representation. Visualization of eradication ...

  2. BUILDING A BILLION SPATIO-TEMPORAL OBJECT SEARCH AND VISUALIZATION PLATFORM

    Directory of Open Access Journals (Sweden)

    D. Kakkar

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.

  3. Building a Billion Spatio-Temporal Object Search and Visualization Platform

    Science.gov (United States)

    Kakkar, D.; Lewis, B.

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.
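
    The BOP stack itself (Kafka ingest, enrichment, Solr spatial indexing) is only summarized in the two records above, so here is a small hedged sketch of the final indexing step: posting one enriched geo-tweet document to a Solr core over its JSON update API with the requests library. The host, core name, and field names are placeholders, not the project's actual schema.

```python
# Hedged sketch: index one enriched geo-tweet into a Solr core via the JSON
# update API. Host, core, and field names are placeholders (not BOP's schema).
import requests

SOLR_UPDATE = "http://localhost:8983/solr/geotweets/update?commit=true"

doc = {
    "id": "tweet-123456789",
    "created_at": "2017-10-01T12:34:56Z",
    "location_rpt": "42.3601,-71.0589",    # lat,lon for a spatial (RPT) field
    "text": "example geo-tweet",
    "sentiment": 0.7,                      # added during enrichment (assumed field)
    "admin_code": "US-MA",                 # census/admin boundary code (assumed field)
}

resp = requests.post(SOLR_UPDATE, json=[doc], timeout=10)
resp.raise_for_status()
print(resp.json()["responseHeader"]["status"])   # 0 on success
```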

  4. More Than Meets the Eye: The Merging of Perceptual and Conceptual Knowledge in the Anterior Temporal Face Area.

    Directory of Open Access Journals (Sweden)

    Jessica A. Collins

    2016-05-01

    An emerging body of research has supported the existence of a small face sensitive region in the ventral anterior temporal lobe (ATL), referred to here as the anterior temporal face area. The contribution of this region in the greater face-processing network remains poorly understood. The goal of the present study was to test the relative sensitivity of this region to perceptual as well as conceptual information about people and objects. We contrasted the sensitivity of this region to that of two highly-studied face-sensitive regions, the fusiform face area and the occipital face area, as well as a control region in early visual cortex. Our findings revealed that multivoxel activity patterns in the anterior temporal face area contain information about facial identity, as well as conceptual attributes such as one’s occupation. The sensitivity of this region to the conceptual attributes of people was greater than that of posterior face processing regions. In addition, the anterior temporal face area overlaps with voxels that contain information about the conceptual attributes of concrete objects, supporting a generalized role of the ventral ATLs in the identification and conceptual processing of multiple stimulus classes. 1. Introduction. Over a decade of neuroimaging work has characterized the neural basis of face perception and identified several nodes that preferentially respond to faces relative to other objects (Haxby, Hoffman, & Gobbini, 2000; Kanwisher & Yovel, 2006). Most of this work has focused on the fusiform face area (FFA) and the occipital face area (OFA) (Kanwisher, McDermott, & Chun, 1997; Kanwisher & Yovel, 2006; Pitcher, Walsh, Yovel, & Duchaine, 2007); however, an emerging literature has implicated an anterior temporal face area, on the ventral surface of the right anterior temporal lobes (vATLs) in or near perirhinal cortex, in facial processing (Avidan et al., 2013; Pinsk et al., 2009; Rajimehr, Young, & Tootell, 2009; Tsao
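
    To illustrate what it means operationally for "multivoxel activity patterns" to contain information about facial identity, the sketch below runs a standard cross-validated linear classifier over simulated voxel patterns from a region of interest; the data, classifier choice, and cross-validation scheme are generic MVPA conventions assumed for illustration, not the study's actual analysis.

```python
# Generic MVPA illustration (not the study's analysis): cross-validated decoding
# of stimulus identity from simulated ROI voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials_per_identity, n_voxels, n_identities = 20, 100, 4

# Each identity gets a weak but reliable voxel pattern plus trial-by-trial noise.
prototypes = rng.standard_normal((n_identities, n_voxels))
X = np.vstack([
    prototypes[i] * 0.5 + rng.standard_normal((n_trials_per_identity, n_voxels))
    for i in range(n_identities)
])
y = np.repeat(np.arange(n_identities), n_trials_per_identity)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_identities:.2f})")
```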

  5. Temporal expectancy in the context of a theory of visual attention.

    Science.gov (United States)

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-10-19

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue-stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0 ms) and the perceptual processing speed (v letters s(-1)) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations.

  6. Temporal expectancy in the context of a theory of visual attention

    Science.gov (United States)

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-01-01

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue–stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0 ms) and the perceptual processing speed (v letters s−1) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations. PMID:24018716
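
    The relationship reported in the two records above, that perceptual processing speed v grows linearly with the logarithm of the foreperiod hazard rate, can be written as v = a + b·ln(λ); the sketch below fits that line to made-up (hazard rate, v) pairs with numpy.polyfit. The numbers are invented for illustration and are not the study's estimates.

```python
# Illustrative fit of v = a + b * ln(hazard rate); values are made up.
import numpy as np

hazard = np.array([1/16, 1/8, 1/4, 1/2, 1, 2])            # events per second (assumed)
v      = np.array([22.0, 27.5, 33.0, 38.0, 44.5, 49.0])   # letters per second (assumed)

b, a = np.polyfit(np.log(hazard), v, deg=1)                # slope, intercept
print(f"v ~ {a:.1f} + {b:.1f} * ln(hazard rate)")
```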

  7. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps.

    Science.gov (United States)

    Sood, Mariam R; Sereno, Martin I

    2016-08-01

    Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  8. Visualizing the Past: The Design of a Temporally Enabled Map for Presentation (TEMPO)

    Directory of Open Access Journals (Sweden)

    Nathan Prestopnik

    2012-01-01

    We present a design case for a prototype visualization tool called the Temporally Enabled Map for Presentation (TEMPO). Designed for use in the lecture classroom, TEMPO is an interactive animated map that addresses a common problem in military history: the shortcomings of traditional static (non-interactive, non-animated) maps. Static maps show spatial elements well, but cannot do more than approximate temporal events using multiple views, movement arrows, and the like. TEMPO provides a more complete view of past historical events by showing them from start to finish. In our design case we describe our development process, which included consultation with military history domain experts, classroom observations, and the application of techniques derived from visualization and Human-Computer Interaction (HCI) literature and theory. Our design case shows how the design of an educational tool can motivate scholarly evaluation, and we describe how some theories were first embraced and then rejected as design circumstances required. Finally, we explore a future direction for TEMPO: tools to support creative interactions with visualizations, where students or instructors can learn by visualizing historical events for themselves. A working version of the finished TEMPO artifact is included as an interactive element in this document.

  9. Visual Working Memory Storage Recruits Sensory Processing Areas

    NARCIS (Netherlands)

    Gayet, Surya; Paffen, Chris L E; Van der Stigchel, Stefan

    Human visual processing is subject to a dynamic influx of visual information. Visual working memory (VWM) allows for maintaining relevant visual information available for subsequent behavior. According to the dominating view, VWM recruits sensory processing areas to maintain this visual information

  10. Visual working memory storage recruits sensory processing areas

    NARCIS (Netherlands)

    Gayet, S.; Paffen, C.L.E.; Stigchel, S. van der

    2018-01-01

    Human visual processing is subject to a dynamic influx of visual information. Visual working memory (VWM) allows for maintaining relevant visual information available for subsequent behavior. According to the dominating view, VWM recruits sensory processing areas to maintain this visual information

  11. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or the second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.

  12. Visual working memory capacity and the medial temporal lobe.

    Science.gov (United States)

    Jeneson, Annette; Wixted, John T; Hopkins, Ramona O; Squire, Larry R

    2012-03-07

    Patients with medial temporal lobe (MTL) damage are sometimes impaired at remembering visual information across delays as short as a few seconds. Such impairments could reflect either impaired visual working memory capacity or impaired long-term memory (because attention has been diverted or because working memory capacity has been exceeded). Using a standard change-detection task, we asked whether visual working memory capacity is intact or impaired after MTL damage. Five patients with hippocampal lesions and one patient with large MTL lesions saw an array of 1, 2, 3, 4, or 6 colored squares, followed after 1, 3, or 8 s by a second array where one of the colored squares was cued. The task was to decide whether the cued square had the same color as the corresponding square in the first array or a different color. At the 1 s delay typically used to assess working memory capacity, patients performed as well as controls at all array sizes. At the longer delays, patients performed as well as controls at small array sizes, thought to be within the capacity limit, and worse than controls at large array sizes, thought to exceed the capacity limit. The findings suggest that visual working memory capacity in humans is intact after damage to the MTL structures and that damage to these structures impairs performance only when visual working memory is insufficient to support performance.

  13. The Emergence of Visual Awareness: Temporal Dynamics in Relation to Task and Mask Type

    Science.gov (United States)

    Kiefer, Markus; Kammer, Thomas

    2017-01-01

    One aspect of consciousness phenomena, the temporal emergence of visual awareness, has been subject of a controversial debate. How can visual awareness, that is the experiential quality of visual stimuli, be characterized best? Is there a sharp discontinuous or dichotomous transition between unaware and fully aware states, or does awareness emerge gradually encompassing intermediate states? Previous studies yielded conflicting results and supported both dichotomous and gradual views. It is well conceivable that these conflicting results are more than noise, but reflect the dynamic nature of the temporal emergence of visual awareness. Using a psychophysical approach, the present research tested whether the emergence of visual awareness is context-dependent with a temporal two-alternative forced choice task. During backward masking of word targets, it was assessed whether the relative temporal sequence of stimulus thresholds is modulated by the task (stimulus presence, letter case, lexical decision, and semantic category) and by mask type. Four masks with different similarity to the target features were created. Psychophysical functions were then fitted to the accuracy data in the different task conditions as a function of the stimulus mask SOA in order to determine the inflection point (conscious threshold of each feature) and slope of the psychophysical function (transition from unaware to aware within each feature). Depending on feature-mask similarity, thresholds in the different tasks were highly dispersed suggesting a graded transition from unawareness to awareness or had less differentiated thresholds indicating that clusters of features probed by the tasks quite simultaneously contribute to the percept. The latter observation, although not compatible with the notion of a sharp all-or-none transition between unaware and aware states, suggests a less gradual or more discontinuous emergence of awareness. Analyses of slopes of the fitted psychophysical functions

  14. Functional differentiation of macaque visual temporal cortical neurons using a parametric action space.

    Science.gov (United States)

    Vangeneugden, Joris; Pollick, Frank; Vogels, Rufin

    2009-03-01

    Neurons in the rostral superior temporal sulcus (STS) are responsive to displays of body movements. We employed a parametric action space to determine how similarities among actions are represented by visual temporal neurons and how form and motion information contributes to their responses. The stimulus space consisted of a stick-plus-point-light figure performing arm actions and their blends. Multidimensional scaling showed that the responses of temporal neurons represented the ordinal similarity between these actions. Further tests distinguished neurons responding equally strongly to static presentations and to actions ("snapshot" neurons), from those responding much less strongly to static presentations, but responding well when motion was present ("motion" neurons). The "motion" neurons were predominantly found in the upper bank/fundus of the STS, and "snapshot" neurons in the lower bank of the STS and inferior temporal convexity. Most "motion" neurons showed strong response modulation during the course of an action, thus responding to action kinematics. "Motion" neurons displayed a greater average selectivity for these simple arm actions than did "snapshot" neurons. We suggest that the "motion" neurons code for visual kinematics, whereas the "snapshot" neurons code for form/posture, and that both can contribute to action recognition, in agreement with computation models of action recognition.

  15. Temporal and spatial predictability of an irrelevant event differently affect detection and memory of items in a visual sequence

    Directory of Open Access Journals (Sweden)

    Junji eOhyama

    2016-02-01

    We examined how the temporal and spatial predictability of a task-irrelevant visual event affects the detection and memory of a visual item embedded in a continuously changing sequence. Participants observed 11 sequentially presented letters, during which a task-irrelevant visual event was either present or absent. Predictabilities of spatial location and temporal position of the event were controlled in 2 × 2 conditions. In the spatially predictable conditions, the event occurred at the same location within the stimulus sequence or at another location, while, in the spatially unpredictable conditions, it occurred at random locations. In the temporally predictable conditions, the event timing was fixed relative to the order of the letters, while in the temporally unpredictable conditions, it could not be predicted from the letter order. Participants performed a working memory task and a target detection reaction time task. Memory accuracy was higher for a letter simultaneously presented at the same location as the event in the temporally unpredictable conditions, irrespective of the spatial predictability of the event. On the other hand, the detection reaction times were only faster for a letter simultaneously presented at the same location as the event when the event was both temporally and spatially predictable. Thus, to facilitate ongoing detection processes, an event must be predictable both in space and time, while memory processes are enhanced by temporally unpredictable (i.e., surprising) events. Evidently, temporal predictability has differential effects on detection and memory of a visual item embedded in a sequence of images.

  16. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    Science.gov (United States)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.
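
    The abstract describes organizing WAMI frames as spatiotemporal tiles served from a temporal cache of tiled pyramids, but gives no implementation details. The following minimal sketch (hypothetical class and parameter names, not KOLAM's actual code) shows one way such a cache could be keyed by (frame, level, row, col) with least-recently-used eviction and simple temporal prefetching:

```python
from collections import OrderedDict

class TilePyramidCache:
    """Sketch of a spatiotemporal tile cache: tiles are addressed by
    (frame, level, row, col) and evicted in least-recently-used order."""

    def __init__(self, loader, max_tiles=4096):
        self.loader = loader        # callable (frame, level, row, col) -> tile data
        self.max_tiles = max_tiles
        self._cache = OrderedDict()

    def get(self, frame, level, row, col):
        key = (frame, level, row, col)
        if key in self._cache:
            self._cache.move_to_end(key)         # mark as most recently used
            return self._cache[key]
        tile = self.loader(*key)                 # cache miss: load from disk/network
        self._cache[key] = tile
        if len(self._cache) > self.max_tiles:
            self._cache.popitem(last=False)      # evict the least recently used tile
        return tile

    def prefetch(self, frames, level, rows, cols):
        # Warm the temporal dimension ahead of playback (e.g., the next few frames).
        for frame in frames:
            for row in rows:
                for col in cols:
                    self.get(frame, level, row, col)
```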

  17. Representation of Glossy Material Surface in Ventral Superior Temporal Sulcal Area of Common Marmosets.

    Science.gov (United States)

    Miyakawa, Naohisa; Banno, Taku; Abe, Hiroshi; Tani, Toshiki; Suzuki, Wataru; Ichinohe, Noritaka

    2017-01-01

    The common marmoset (Callithrix jacchus) is one of the smallest species of primates, with high visual recognition abilities that allow them to judge the identity and quality of food and objects in their environment. To address the cortical processing of visual information related to material surface features in marmosets, we presented a set of stimuli that have identical three-dimensional shapes (bone, torus or amorphous) but different material appearances (ceramic, glass, fur, leather, metal, stone, wood, or matte) to anesthetized marmosets, and recorded multiunit activities from an area ventral to the superior temporal sulcus (STS) using a multi-shanked, depth-resolved multi-electrode array. Out of 143 visually responsive multiunits recorded from four animals, 29% had a significant main effect only of the material, 3% only of the shape and 43% of both the material and the shape. Furthermore, we found neuronal cluster(s), in which most cells: (1) showed a significant main effect in material appearance; (2) the best stimulus was a glossy material (glass or metal); and (3) had reduced response to the pixel-shuffled version of the glossy material images. The location of the gloss-selective area was in agreement with previous macaque studies, showing activation in the ventral bank of STS. Our results suggest that perception of gloss is an important ability preserved across a wide range of primate species.

  18. Application of digital tomosynthesis to radiographic diagnosis of the temporal bone. Studies on visualization in normal subjects

    International Nuclear Information System (INIS)

    Kawai, Takashi

    1995-01-01

    To examine the usefulness of digital tomosynthesis for conducting radiographic diagnosis of the temporal bone, visualization of various aural structures such as the semicircular canals, cochlea, vestibular apparatus, ossicles of the ear and facial nerve canal was examined in 18 volunteers. The visualization of temporal bone specimens by digital tomosynthesis and CT images (slice thickness: 1.5 mm) was compared. The results showed that this system (Digital Tomosynthesis) produced clear images of bony labyrinthine structures such as the semicircular canals, cochlea, and vestibular apparatus. Visualization of the ossicles was also clear, and their continuity could be comprehended better than on CT images. This system also provided good visualization of the labyrinthine and tympanic parts of the facial nerve canal, although CT images had greater sharpness. Visualization of the lower half of the mastoid part was poor with this system. (author)

  19. The temporal dynamics of implicit processing of non-letter, letter, and word-forms in the human visual cortex

    Directory of Open Access Journals (Sweden)

    Lawrence Gregory Appelbaum

    2009-11-01

    The decoding of visually presented line segments into letters, and letters into words, is critical to fluent reading abilities. Here we investigate the temporal dynamics of visual orthographic processes, focusing specifically on right hemisphere contributions and interactions between the hemispheres involved in the implicit processing of visually presented words, consonants, false fonts, and symbolic strings. High-density EEG was recorded while participants detected infrequent, simple, perceptual targets (dot strings) embedded amongst character strings. Beginning at 130 ms, orthographic and non-orthographic stimuli were distinguished by a sequence of ERP effects over occipital recording sites. These early-latency occipital effects were dominated by enhanced right-sided negative-polarity activation for non-orthographic stimuli that peaked at around 180 ms. This right-sided effect was followed by bilateral positive occipital activity for false fonts, but not symbol strings. Moreover, the size of components of this later positive occipital wave was inversely correlated with the right-sided ROcc180 wave, suggesting that subjects who had larger early right-sided activation for non-orthographic stimuli had less need for more extended bilateral (e.g., interhemispheric) processing of those stimuli shortly later. Additional early (130-150 ms) negative-polarity activity over left occipital cortex and longer-latency centrally distributed responses (>300 ms) were present, likely reflecting implicit activation of the previously reported 'visual-word-form' area and N400-related responses, respectively. Collectively, these results provide a close look at some relatively unexplored portions of the temporal flow of information processing in the brain related to the implicit processing of potentially linguistic information and provide valuable information about the interactions between hemispheres supporting visual orthographic processing.

  20. Multifocal Visual Evoked Potential in Eyes With Temporal Hemianopia From Chiasmal Compression: Correlation With Standard Automated Perimetry and OCT Findings.

    Science.gov (United States)

    Sousa, Rafael M; Oyamada, Maria K; Cunha, Leonardo P; Monteiro, Mário L R

    2017-09-01

    To verify whether multifocal visual evoked potential (mfVEP) can differentiate eyes with temporal hemianopia due to chiasmal compression from healthy controls. To assess the relationship between mfVEP, standard automated perimetry (SAP), and Fourier domain-optical coherence tomography (FD-OCT) macular and peripapillary retinal nerve fiber layer (RNFL) thickness measurements. Twenty-seven eyes with permanent temporal visual field (VF) defects from chiasmal compression on SAP and 43 eyes of healthy controls were submitted to mfVEP and FD-OCT scanning. Multifocal visual evoked potential was elicited using a stimulus pattern of 60 sectors and the responses were averaged for the four quadrants and two hemifields. Optical coherence tomography macular measurements were averaged in quadrants and halves, while peripapillary RNFL thickness was averaged in four sectors around the disc. Visual field loss was estimated in four quadrants and each half of the 24-2 strategy test points. Multifocal visual evoked potential measurements in the two groups were compared using generalized estimated equations, and the correlations between mfVEP, VF, and OCT findings were quantified. Multifocal visual evoked potential-measured temporal P1 and N2 amplitudes were significantly smaller in patients than in controls. No significant difference in amplitude was observed for nasal parameters. A significant correlation was found between mfVEP amplitudes and temporal VF loss, and between mfVEP amplitudes and the corresponding OCT-measured macular and RNFL thickness parameters. Multifocal visual evoked potential amplitude parameters were able to differentiate eyes with temporal hemianopia from controls and were significantly correlated with VF and OCT findings, suggesting mfVEP is a useful tool for the detection of visual abnormalities in patients with chiasmal compression.

  1. Resolution of spatial and temporal visual attention in infants with fragile X syndrome

    OpenAIRE

    Farzin, Faraz; Rivera, Susan M.; Whitney, David

    2011-01-01

    Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal–parietal networks of the brain. The goal of the current study was to examine whether reduced resolution of spatial and/or temporal visual attention may underlie perceptual def...

  2. Right hemispheric dominance of visual phenomena evoked by intracerebral stimulation of the human visual cortex.

    Science.gov (United States)

    Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis

    2014-07-01

    Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.

  3. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated.

    Directory of Open Access Journals (Sweden)

    Merle-Marie Ahrens

    Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech.

  4. A collaborative large spatio-temporal data visual analytics architecture for emergence response

    International Nuclear Information System (INIS)

    Guo, D; Li, J; Zhou, Y; Cao, H

    2014-01-01

    Unconventional emergencies typically break out more suddenly, spread more quickly, cause more secondary damage and give rise to more derived disasters than is usually expected. The data volume and urgency of such emergencies exceed the capacity of current emergency management systems. In this paper, we propose a three-tier collaborative spatio-temporal visual analysis architecture to support emergency management. The prototype system, based on a cloud computing environment, supports aggregation of massive unstructured and semi-structured data; integration of various computing models and algorithms; and collaborative visualization and visual analytics among users with a diversity of backgrounds. Distributed data at the 100 TB scale are integrated into a unified platform and shared with thousands of experts and government agencies through nearly 100 models. Users explore, visualize and analyse the big data and devise collaborative countermeasures to emergencies.

  5. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part I: the topography of light detection and temporal-information processing.

    Science.gov (United States)

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Temporal performance parameters vary across the visual field. Their topographical distributions relative to each other and relative to basic visual performance measures and their relative change over the life span are unknown. Our goal was to characterize the topography and age-related change of temporal performance. We acquired visual field maps in 95 healthy participants (age: 10-90 years): perimetric thresholds, double-pulse resolution (DPR), reaction times (RTs), and letter contrast thresholds. DPR and perimetric thresholds increased with eccentricity and age; the periphery showed a more pronounced age-related increase than the center. RT increased only slightly and uniformly with eccentricity. It remained almost constant up to the age of 60, a marked change occurring only above 80. Overall, age was a poor predictor of functionality. Performance decline could be explained only in part by the aging of the retina and optic media. In Part II, we therefore examine higher visual and cognitive functions.

  6. Adjusted functional boxplots for spatio-temporal data visualization and outlier detection

    KAUST Repository

    Sun, Ying

    2011-10-24

    This article proposes a simulation-based method to adjust functional boxplots for correlations when visualizing functional and spatio-temporal data, as well as detecting outliers. We start by investigating the relationship between the spatio-temporal dependence and the 1.5 times the 50% central region empirical outlier detection rule. Then, we propose to simulate observations without outliers on the basis of a robust estimator of the covariance function of the data. We select the constant factor in the functional boxplot to control the probability of correctly detecting no outliers. Finally, we apply the selected factor to the functional boxplot of the original data. As applications, the factor selection procedure and the adjusted functional boxplots are demonstrated on sea surface temperatures, spatio-temporal precipitation and general circulation model (GCM) data. The outlier detection performance is also compared before and after the factor adjustment. © 2011 John Wiley & Sons, Ltd.
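
    A rough sense of the factor-selection idea can be conveyed in code. The sketch below is a simplified stand-in (a distance-to-median depth proxy instead of band depth, and a plain grid search over candidate factors); the function names, the 0.993 target and the simulation settings are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

def flags_outliers(curves, factor):
    """Simplified functional boxplot: flag curves leaving the inflated central region."""
    med = np.median(curves, axis=0)
    depth = -np.abs(curves - med).mean(axis=1)       # proxy depth: closeness to median curve
    deepest = np.argsort(depth)[::-1][: len(curves) // 2]
    lo = curves[deepest].min(axis=0)                 # envelope of the 50% central region
    hi = curves[deepest].max(axis=0)
    fence_lo = lo - factor * (hi - lo)
    fence_hi = hi + factor * (hi - lo)
    return np.any((curves < fence_lo) | (curves > fence_hi), axis=1)

def adjust_factor(robust_cov, n_curves, n_sim=200, target=0.993, seed=0):
    """Smallest inflation factor for which outlier-free data simulated from the
    robust covariance estimate is declared clean with probability >= target."""
    rng = np.random.default_rng(seed)
    p = robust_cov.shape[0]
    for factor in np.arange(0.5, 5.01, 0.1):
        clean = sum(
            not flags_outliers(
                rng.multivariate_normal(np.zeros(p), robust_cov, size=n_curves), factor
            ).any()
            for _ in range(n_sim)
        )
        if clean / n_sim >= target:
            return round(float(factor), 1)
    return 5.0
```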

  7. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    Science.gov (United States)

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for

  8. Person perception involves functional integration between the extrastriate body area and temporal pole.

    Science.gov (United States)

    Greven, Inez M; Ramsey, Richard

    2017-02-01

    The majority of human neuroscience research has focussed on understanding functional organisation within segregated patches of cortex. The ventral visual stream has been associated with the detection of physical features such as faces and body parts, whereas the theory-of-mind network has been associated with making inferences about mental states and underlying character, such as whether someone is friendly, selfish, or generous. To date, however, it is largely unknown how such distinct processing components integrate neural signals. Using functional magnetic resonance imaging and connectivity analyses, we investigated the contribution of functional integration to social perception. During scanning, participants observed bodies that had previously been associated with trait-based or neutral information. Additionally, we independently localised the body perception and theory-of-mind networks. We demonstrate that when observing someone who cues the recall of stored social knowledge compared to non-social knowledge, a node in the ventral visual stream (extrastriate body area) shows greater coupling with part of the theory-of-mind network (temporal pole). These results show that functional connections provide an interface between perceptual and inferential processing components, thus providing neurobiological evidence that supports the view that understanding the visual environment involves interplay between conceptual knowledge and perceptual processing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Cortical activation during Braille reading is influenced by early visual experience in subjects with severe visual disability: a correlational fMRI study.

    Science.gov (United States)

    Melzer, P; Morgan, V L; Pickens, D R; Price, R R; Wall, R S; Ebner, F F

    2001-11-01

    Functional magnetic resonance imaging was performed on blind adults resting and reading Braille. The strongest activation was found in primary somatic sensory/motor cortex on both cortical hemispheres. Additional foci of activation were situated in the parietal, temporal, and occipital lobes where visual information is processed in sighted persons. The regions were differentiated most in the correlation of their time courses of activation with resting and reading. Differences in magnitude and expanse of activation were substantially less significant. Among the traditionally visual areas, the strength of correlation was greatest in posterior parietal cortex and moderate in occipitotemporal, lateral occipital, and primary visual cortex. It was low in secondary visual cortex as well as in dorsal and ventral inferior temporal cortex and posterior middle temporal cortex. Visual experience increased the strength of correlation in all regions except dorsal inferior temporal and posterior parietal cortex. The greatest statistically significant increase, i.e., approximately 30%, was in ventral inferior temporal and posterior middle temporal cortex. In these regions, words are analyzed semantically, which may be facilitated by visual experience. In contrast, visual experience resulted in a slight, insignificant diminution of the strength of correlation in dorsal inferior temporal cortex where language is analyzed phonetically. These findings affirm that posterior temporal regions are engaged in the processing of written language. Moreover, they suggest that this function is modified by early visual experience. Furthermore, visual experience significantly strengthened the correlation of activation and Braille reading in occipital regions traditionally involved in the processing of visual features and object recognition suggesting a role for visual imagery. Copyright 2001 Wiley-Liss, Inc.

  10. A FRAMEWORK FOR ONLINE SPATIO-TEMPORAL DATA VISUALIZATION BASED ON HTML5

    Directory of Open Access Journals (Sweden)

    B. Mao

    2012-07-01

    The Web is entering a new phase – HTML5. New features of HTML5 should be studied for online spatio-temporal data visualization. In the proposed framework, spatio-temporal data is stored on the data server and is sent to user browsers with WebSocket. Public geo-data such as Internet digital maps are integrated into the browsers. Animation is then implemented through the canvas object defined by the HTML5 specification. To simulate the spatio-temporal data source, we collected the daily locations of 15 users with GPS trackers. The current positions of the users were collected every minute and recorded in a file. Based on this file, we generate a real-time spatio-temporal data source which sends out the current user locations every second. By compressing real time by a factor of 60, we can observe the movement clearly. The data transmitted with WebSocket are the coordinates of the users' current positions, which can be displayed in client browsers.
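
    The server side of the described setup, replaying recorded positions over WebSocket at one update per second, could look roughly like the sketch below. It assumes the third-party Python `websockets` package and a hypothetical CSV file of recorded positions; neither is taken from the paper:

```python
# Minimal sketch of the data-server side: replay recorded positions over a
# WebSocket, one update per second (each recorded minute compressed into one
# second, i.e. a 60x speed-up). Assumes the third-party `websockets` package
# and a hypothetical "positions.csv" with user_id,timestamp,lat,lon rows.
import asyncio
import csv
import json

import websockets


def load_positions(path="positions.csv"):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))          # one row per user per recorded minute


async def stream_positions(websocket, path=None):
    for row in load_positions():
        await websocket.send(json.dumps(row))   # push the current position
        await asyncio.sleep(1.0)                # next recorded minute after 1 s


async def main():
    async with websockets.serve(stream_positions, "0.0.0.0", 8765):
        await asyncio.Future()                  # serve until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```

    On the browser side, the corresponding HTML5 pieces would be a WebSocket client receiving these JSON messages and a canvas redraw per message, as described in the abstract.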

  11. Functional visual fields: relationship of visual field areas to self-reported function.

    Science.gov (United States)

    Subhi, Hikmat; Latham, Keziah; Myint, Joy; Crossland, Michael D

    2017-07-01

    The aim of this study is to relate areas of the visual field to functional difficulties to inform the development of a binocular visual field assessment that can reflect the functional consequences of visual field loss. Fifty-two participants with peripheral visual field loss undertook binocular assessment of visual fields using the 30-2 and 60-4 SITA Fast programs on the Humphrey Field Analyser, and mean thresholds were derived. Binocular visual acuity, contrast sensitivity and near reading performance were also determined. Self-reported overall and mobility function were assessed using the Dutch ICF Activity Inventory. Greater visual field loss (0-60°) was associated with worse self-reported function both overall (R² = 0.50) and for mobility function (R² = 0.61), including in multiple regression analyses. Superior and inferior visual field areas related similarly to mobility function (R² = 0.56) in multiple regression analysis. Mean threshold of the binocular visual field to 60° eccentricity is a good predictor of self-reported function overall, and particularly of mobility function. Both the central (0-30°) and peripheral (30-60°) mean thresholds are good predictors of self-reported function, but the peripheral (30-60°) field is a slightly better predictor of mobility function, and should not be ignored when considering functional consequences of field loss. The inferior visual field is a slightly stronger predictor of perceived overall and mobility function than the superior field. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.

  12. Differences in visual vs. verbal memory impairments as a result of focal temporal lobe damage in patients with traumatic brain injury.

    Science.gov (United States)

    Ariza, Mar; Pueyo, Roser; Junqué, Carme; Mataró, María; Poca, María Antonia; Mena, Maria Pau; Sahuquillo, Juan

    2006-09-01

    The aim of the present study was to determine whether the type of lesion in a sample of moderate and severe traumatic brain injury (TBI) was related to material-specific memory impairment. Fifty-nine patients with TBI were classified into three groups according to whether the site of the lesion was right temporal, left temporal or diffuse. Six-months post-injury, visual (Warrington's Facial Recognition Memory Test and Rey's Complex Figure Test) and verbal (Rey's Auditory Verbal Learning Test) memories were assessed. Visual memory deficits assessed by facial memory were associated with right temporal lobe lesion, whereas verbal memory performance assessed with a list of words was related to left temporal lobe lesion. The group with diffuse injury showed both verbal and visual memory impairment. These results suggest a material-specific memory impairment in moderate and severe TBI after focal temporal lesions and a non-specific memory impairment after diffuse damage.

  13. Multiple pathways carry signals from short-wavelength-sensitive ('blue') cones to the middle temporal area of the macaque.

    Science.gov (United States)

    Jayakumar, Jaikishan; Roy, Sujata; Dreher, Bogdan; Martin, Paul R; Vidyasagar, Trichur R

    2013-01-01

    We recorded spike activity of single neurones in the middle temporal visual cortical area (MT or V5) of anaesthetised macaque monkeys. We used flashing, stationary spatially circumscribed, cone-isolating and luminance-modulated stimuli of uniform fields to assess the effects of signals originating from the long-, medium- or short- (S) wavelength-sensitive cone classes. Nearly half (41/86) of the tested MT neurones responded reliably to S-cone-isolating stimuli. Response amplitude in the majority of the neurones tested further (19/28) was significantly reduced, though not always completely abolished, during reversible inactivation of visuotopically corresponding regions of the ipsilateral primary visual cortex (striate cortex, area V1). Thus, the present data indicate that signals originating in S-cones reach area MT, either via V1 or via a pathway that does not go through area V1. We did not find a significant difference between the mean latencies of spike responses of MT neurones to signals that bypass V1 and those that do not; the considerable overlap we observed precludes the use of spike-response latency as a criterion to define the routes through which the signals reach MT.

  14. How the visual brain encodes and keeps track of time.

    Science.gov (United States)

    Salvioni, Paolo; Murray, Micah M; Kalmbach, Lysiann; Bueti, Domenica

    2013-07-24

    Time is embedded in any sensory experience: the movements of a dance, the rhythm of a piece of music, the words of a speaker are all examples of temporally structured sensory events. In humans, if and how visual cortices perform temporal processing remains unclear. Here we show that both primary visual cortex (V1) and extrastriate area V5/MT are causally involved in encoding and keeping time in memory and that this involvement is independent from low-level visual processing. Most importantly we demonstrate that V1 and V5/MT come into play simultaneously and seem to be functionally linked during interval encoding, whereas they operate serially (V1 followed by V5/MT) and seem to be independent while maintaining temporal information in working memory. These data help to refine our knowledge of the functional properties of human visual cortex, highlighting the contribution and the temporal dynamics of V1 and V5/MT in the processing of the temporal aspects of visual information.

  15. A Fresh Look at Spatio-Temporal Remote Sensing Data: Data Formats, Processing Flow, and Visualization

    Science.gov (United States)

    Gens, R.

    2017-12-01

    With an increasing number of experimental and operational satellites in orbit, remote sensing based mapping and monitoring of the dynamic Earth has entered the realm of `big data'. The Landsat series of satellites alone provides a near-continuous archive of 45 years of data. The availability of such spatio-temporal datasets has created opportunities for long-term monitoring of diverse features and processes operating in the Earth's terrestrial and aquatic systems. Processes such as erosion, deposition, subsidence, uplift, evapotranspiration, urbanization, and land-cover regime shifts can not only be monitored, but change can also be quantified using time-series data analysis. This unique opportunity comes with new challenges in management, analysis, and visualization of spatio-temporal datasets. Data need to be stored in a user-friendly format, and relevant metadata need to be recorded, to allow maximum flexibility for data exchange and use. Specific data processing workflows need to be defined to support time-series analysis for specific applications. Value-added data products need to be generated keeping in mind the needs of the end-users, and using best practices in complex data visualization. This presentation systematically highlights the various steps for preparing spatio-temporal remote sensing data for time series analysis. It showcases a prototype workflow for remote sensing based change detection that can be generically applied while preserving the application-specific fidelity of the datasets. The prototype includes strategies for visualizing change over time. This has been exemplified using a time series of optical and SAR images for visualizing the changing glacial, coastal, and wetland landscapes in parts of Alaska.
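
    As a generic illustration of the kind of per-pixel time-series analysis such a workflow enables (not the author's prototype), the sketch below computes a least-squares temporal trend for every pixel of a co-registered image stack:

```python
import numpy as np

def per_pixel_trend(stack, times):
    """Least-squares temporal slope for each pixel of a co-registered image stack.

    stack : (T, H, W) array, e.g. NDVI or radar backscatter over T acquisitions
    times : (T,) acquisition times (years or any consistent unit)
    Returns an (H, W) array of slopes (change per unit time).
    """
    t = np.asarray(times, dtype=float)
    y = stack.reshape(len(t), -1)                        # (T, H*W)
    t_c = t - t.mean()
    slopes = (t_c @ (y - y.mean(axis=0))) / (t_c ** 2).sum()
    return slopes.reshape(stack.shape[1:])

# Toy usage: ten synthetic "acquisitions" of a 100 x 100 scene in which one
# patch slowly brightens; the slope map highlights where change occurred.
rng = np.random.default_rng(0)
times = np.arange(2008, 2018)
scene = rng.normal(0.3, 0.02, size=(10, 100, 100))
scene[:, 40:60, 40:60] += 0.01 * (times - times[0])[:, None, None]
change_map = per_pixel_trend(scene, times)
```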

  16. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  17. Selective attention modulates the direction of audio-visual temporal recalibration.

    Directory of Open Access Journals (Sweden)

    Nara Ikumi

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  18. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  19. Rapid visual grouping and figure-ground processing using temporally structured displays.

    Science.gov (United States)

    Cheadle, Samuel; Usher, Marius; Müller, Hermann J

    2010-08-23

    We examine the time course of visual grouping and figure-ground processing. Figure (contour) and ground (random-texture) elements were flickered with different phases (i.e., contour and background are alternated), requiring the observer to group information within a pre-specified time window. It was found that this grouping has a high temporal resolution: less than 20 ms for smooth contours, and less than 50 ms for line conjunctions with sharp angles. Furthermore, the grouping process takes place without an explicit knowledge of the phase of the elements, and it requires a cumulative build-up of information. The results are discussed in relation to the neural mechanism for visual grouping and figure-ground segregation. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  1. Spatio-temporal dependencies between hospital beds, physicians and health expenditure using visual variables and data classification in statistical table

    Science.gov (United States)

    Medyńska-Gulij, Beata; Cybulski, Paweł

    2016-06-01

    This paper analyses the use of visual variables in statistical tables of hospital bed data as an important tool for revealing spatio-temporal dependencies. It is argued that some of the conclusions drawn from data about public health and public expenditure on health have a spatio-temporal reference. Unlike previous studies, this article combines cartographic pragmatics and spatial visualization with conclusions already established in the public health literature. While significant conclusions about health care and economic factors have been highlighted in research papers, this article is the first to apply visual analysis to a statistical table together with maps, an approach called previsualisation.

  2. Spatio-temporal dependencies between hospital beds, physicians and health expenditure using visual variables and data classification in statistical table

    Directory of Open Access Journals (Sweden)

    Medyńska-Gulij Beata

    2016-06-01

    This paper analyses the use of visual variables in statistical tables of hospital bed data as an important tool for revealing spatio-temporal dependencies. It is argued that some of the conclusions drawn from data about public health and public expenditure on health have a spatio-temporal reference. Unlike previous studies, this article combines cartographic pragmatics and spatial visualization with conclusions already established in the public health literature. While significant conclusions about health care and economic factors have been highlighted in research papers, this article is the first to apply visual analysis to a statistical table together with maps, an approach called previsualisation.

  3. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also benefited from a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regards to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re

  4. Visual dysfunction, neurodegenerative diseases, and aging.

    Science.gov (United States)

    Jackson, Gregory R; Owsley, Cynthia

    2003-08-01

    The four most common sight-threatening conditions in older adults in North America are cataract, ARM, glaucoma, and diabetic retinopathy. Even in their moderate stages, these conditions cause visual sensory impairments and reductions in health-related quality of life, including difficulties in daily tasks and psychosocial problems. Many older adults are free from these conditions, yet still experience a variety of visual perceptual problems resulting from aging-related changes in the optics of the eye and degeneration of the visual neural pathways. These problems consist of impairments in visual acuity, contrast sensitivity, color discrimination, temporal sensitivity, motion perception, peripheral visual field sensitivity, and visual processing speed. PD causes a progressive loss of dopaminergic cells predominantly in the retina and possibly in other areas of the visual system. This retinal dopamine deficiency produces selective spatial-temporal abnormalities in retinal ganglion cell function, probably arising from altered receptive field organization in the PD retina. The cortical degenerative changes characteristic of AD, including neurofibrillary tangles and neuritic plaques, are also present in the visual cortical areas, especially in the visual association areas. The most prominent electrophysiologic change in AD is a delay in the P2 component of the flash VEP. Higher-order visual abilities are typically compromised in AD, including visual attention, perceiving structure from motion, visual memory, visual learning, reading, and object and face perception. There have been reports of a visual variant of AD in which these types of visual problems are the initial and most prominent signs of the disease. Visual sensory impairments (e.g., contrast sensitivity or achromatopsia) also have been reported but are believed to be more reflective of cortical disturbances than of AD-associated optic neuropathy.

  5. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas.

    Directory of Open Access Journals (Sweden)

    Roger Koenig-Robert

    Full Text Available Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alteration in low-level image properties when contrasting distinct object categories. When such contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied with activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while they elicited sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach, and showed that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.
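
    The frequency-tagging logic behind SWIFT can be illustrated with a generic spectral analysis: if object semantics are modulated at a known tagging frequency, a category-selective region's time course should show a peak at that frequency, whereas early visual areas should not. The Python sketch below shows only that analysis step, with placeholder sampling rate, tagging frequency, and signal; it is not the authors' pipeline.

```python
import numpy as np

def tagging_response(ts, fs, f_tag, n_neighbors=10):
    """Amplitude at the tagging frequency relative to neighbouring bins.

    ts    : 1-D ROI or voxel time series (e.g., fMRI percent signal change)
    fs    : sampling rate in Hz (1 / TR for fMRI)
    f_tag : frequency (Hz) at which object semantics were modulated
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(ts))
    freqs = np.fft.rfftfreq(ts.size, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_tag))     # bin closest to the tagging frequency
    lo, hi = max(k - n_neighbors, 1), min(k + n_neighbors + 1, spectrum.size)
    neighbors = np.delete(spectrum[lo:hi], k - lo)
    snr = spectrum[k] / neighbors.mean()     # tag amplitude vs. local spectral baseline
    return spectrum[k], snr

# Example: a noisy signal tagged at 0.05 Hz, sampled every 2 s (TR = 2 s).
fs, f_tag = 0.5, 0.05
t = np.arange(0, 600, 1.0 / fs)
ts = 0.8 * np.sin(2 * np.pi * f_tag * t) + np.random.randn(t.size)
print(tagging_response(ts, fs, f_tag))
```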

  6. Action word Related to Walk Heard by the Ears Activates Visual Cortex and Superior Temporal Gyrus: An fMRI Study

    Directory of Open Access Journals (Sweden)

    Naoyuki Osaka

    2012-10-01

    Full Text Available The cognitive neuroscience of action-language processing is one of the interesting issues concerning the cortical “seat” of word meaning and related action (Pulvermueller, 1999, Behavioral Brain Sciences, 22, 253–336). For example, generation of action verbs referring to various arm or leg actions (e.g., pick or kick) differentially activates areas along the motor strip that overlap with those areas activated by actual movement of the fingers or feet (Hauk et al., 2004, Neuron, 41, 301–307). Meanwhile, mimic words such as onomatopoeia have the potential to selectively and strongly stimulate specific brain regions that hold a specified “seat” of action meaning. In fact, mimic words highly suggestive of laughter and gaze significantly activated the extrastriate visual/premotor cortices and the frontal eye field, respectively (Osaka et al., 2003, Neuroscience Letters, 340, 127–130; 2009, Neuroscience Letters, 461, 65–68). However, the effect of a mimic word related to walking on specific brain regions has not yet been investigated. The present study showed that a mimic word highly suggestive of human walking, heard by the ears with eyes closed, significantly activated visual cortex located in the extrastriate cortex and the superior temporal gyrus, whereas hearing nonsense words that did not imply walking under the same task did not activate these areas. These areas may be critical regions for generating visual images of walking and related actions.

  7. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    Science.gov (United States)

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  8. The Temporal Pole Top-Down Modulates the Ventral Visual Stream During Social Cognition.

    Science.gov (United States)

    Pehrs, Corinna; Zaki, Jamil; Schlochtermeier, Lorna H; Jacobs, Arthur M; Kuchinke, Lars; Koelsch, Stefan

    2017-01-01

    The temporal pole (TP) has been associated with diverse functions of social cognition and emotion processing. Although the underlying mechanism remains elusive, one possibility is that TP acts as domain-general hub integrating socioemotional information. To test this, 26 participants were presented with 60 empathy-evoking film clips during fMRI scanning. The film clips were preceded by a linguistic sad or neutral context and half of the clips were accompanied by sad music. In line with its hypothesized role, TP was involved in the processing of sad context and furthermore tracked participants' empathic concern. To examine the neuromodulatory impact of TP, we applied nonlinear dynamic causal modeling to a multisensory integration network from previous work consisting of superior temporal gyrus (STG), fusiform gyrus (FG), and amygdala, which was extended by an additional node in the TP. Bayesian model comparison revealed a gating of STG and TP on fusiform-amygdalar coupling and an increase of TP to FG connectivity during the integration of contextual information. Moreover, these backward projections were strengthened by emotional music. The findings indicate that during social cognition, TP integrates information from different modalities and top-down modulates lower-level perceptual areas in the ventral visual stream as a function of integration demands. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Visual Temporal Processing in Dyslexia and the Magnocellular Deficit Theory: The Need for Speed?

    Science.gov (United States)

    McLean, Gregor M. T.; Stuart, Geoffrey W.; Coltheart, Veronika; Castles, Anne

    2011-01-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore "temporal" aspects of magnocellular functioning…

  10. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty, such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or when high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  11. Flow Visualization with Quantified Spatial and Temporal Errors Using Edge Maps

    KAUST Repository

    Bhatia, H.; Jadhav, S.; Bremer, P.; Chen, Guoning; Levine, J. A.; Nonato, L. G.; Pascucci, V.

    2012-01-01

    Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user-defined error threshold. Independent of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors which we use to produce more informative visualizations. This work describes the construction of edge maps, the error quantification, and a refinement procedure to adhere to a user-defined error bound. Finally, we introduce new visualizations using the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures. © 2012 IEEE.
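
    The core idea of the edge-map representation can be sketched as a data structure: each triangle stores a map from entry points on its boundary to exit points (plus an error bound), and streamline tracing becomes a walk that repeatedly applies these maps across shared edges. The Python sketch below is a deliberately simplified, hypothetical version (one constant link per entry edge, no geometry), meant only to convey how integration is replaced by boundary-to-boundary look-ups; the class and field names are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class EdgeLink:
    exit_edge: int      # local edge through which the flow leaves the triangle
    exit_param: float   # position along that edge, parameterised in [0, 1]
    error: float        # spatial error bound attached to this link

class EdgeMapField:
    """Toy edge-map field: per-triangle boundary-to-boundary maps plus adjacency."""

    def __init__(self,
                 links: Dict[Tuple[int, int], EdgeLink],
                 neighbors: Dict[Tuple[int, int], Optional[Tuple[int, int]]]):
        self.links = links          # (triangle, entry edge) -> EdgeLink (constant per edge in this toy)
        self.neighbors = neighbors  # (triangle, edge) -> adjacent (triangle, edge), or None at the boundary

    def trace(self, triangle: int, edge: int, param: float, max_steps: int = 1000):
        """Trace a streamline by composing edge maps; returns visited points and the summed error."""
        path, total_error = [(triangle, edge, param)], 0.0
        for _ in range(max_steps):
            link = self.links.get((triangle, edge))
            if link is None:
                break                                   # no outgoing map (e.g., flow ends inside the triangle)
            total_error += link.error
            path.append((triangle, link.exit_edge, link.exit_param))
            neighbor = self.neighbors.get((triangle, link.exit_edge))
            if neighbor is None:
                break                                   # reached the domain boundary
            triangle, edge = neighbor                   # continue in the adjacent triangle
        return path, total_error

# Two triangles sharing an edge: flow enters triangle 0, crosses into triangle 1, and exits the domain.
field = EdgeMapField(
    links={(0, 0): EdgeLink(exit_edge=1, exit_param=0.5, error=0.01),
           (1, 0): EdgeLink(exit_edge=2, exit_param=0.3, error=0.02)},
    neighbors={(0, 1): (1, 0), (1, 2): None},
)
print(field.trace(triangle=0, edge=0, param=0.25))
```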

  12. Flow Visualization with Quantified Spatial and Temporal Errors Using Edge Maps

    KAUST Repository

    Bhatia, H.

    2012-09-01

    Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user-defined error threshold. Independent of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors which we use to produce more informative visualizations. This work describes the construction of edge maps, the error quantification, and a refinement procedure to adhere to a user-defined error bound. Finally, we introduce new visualizations using the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures. © 2012 IEEE.

  13. Large-Area, High-Resolution Tree Cover Mapping with Multi-Temporal SPOT5 Imagery, New South Wales, Australia

    Directory of Open Access Journals (Sweden)

    Adrian Fisher

    2016-06-01

    Full Text Available Tree cover maps are used for many purposes, such as vegetation mapping, habitat connectivity and fragmentation studies. Small remnant patches of native vegetation are recognised as ecologically important, yet they are underestimated in remote sensing products derived from Landsat. High spatial resolution sensors are capable of mapping small patches of trees, but their use in large-area mapping has been limited. In this study, multi-temporal Satellite pour l’Observation de la Terre 5 (SPOT5) High Resolution Geometrical data were pan-sharpened to 5 m resolution and used to map tree cover for the Australian state of New South Wales (NSW), an area of over 800,000 km2. Complete coverages of SPOT5 panchromatic and multispectral data over NSW were acquired during four consecutive summers (2008–2011) for a total of 1256 images. After pre-processing, the imagery was used to model foliage projective cover (FPC), a measure of tree canopy density commonly used in Australia. The multi-temporal imagery, FPC models and 26,579 training pixels were used in a binomial logistic regression model to estimate the probability of each pixel containing trees. The probability images were classified into a binary map of tree cover using local thresholds, and then visually edited to reduce errors. The final tree map was then attributed with the mean FPC value from the multi-temporal imagery. Validation of the binary map based on visually assessed high resolution reference imagery revealed an overall accuracy of 88% (±0.51% standard error), while comparison against airborne lidar derived data also resulted in an overall accuracy of 88%. A preliminary assessment of the FPC map by comparing against 76 field measurements showed a very good agreement (r2 = 0.90) with a root mean square error of 8.57%, although this may not be representative due to the opportunistic sampling design. The map represents a regionally consistent and locally relevant record of tree cover for NSW, and
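
    The per-pixel classification step described here (a binomial logistic regression on multi-temporal features, a probability-of-tree image, and a threshold to a binary map attributed with FPC) can be sketched in a few lines. The following Python example uses scikit-learn with synthetic stand-in data; the feature layout, threshold, and FPC layer are placeholders, not the study's actual inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: rows = training pixels, columns = multi-temporal
# features (e.g., four summers of spectral bands and modelled FPC values).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(26579, 12))            # 26,579 training pixels, as in the study
y_train = (X_train[:, 0] + 0.5 * X_train[:, 5] + rng.normal(size=26579)) > 0  # 1 = tree, 0 = no tree

model = LogisticRegression(max_iter=1000).fit(X_train, y_train.astype(int))

# Apply to an image tile flattened to (n_pixels, n_features), then threshold.
tile = rng.normal(size=(512 * 512, 12))
p_tree = model.predict_proba(tile)[:, 1].reshape(512, 512)   # probability-of-tree image

local_threshold = 0.5             # the study tuned thresholds locally and edited the map visually
tree_mask = p_tree >= local_threshold

# Attribute the binary map with mean FPC (here a stand-in FPC layer).
fpc = rng.uniform(0, 100, size=(512, 512))
mean_fpc_of_trees = fpc[tree_mask].mean()
print(tree_mask.mean(), mean_fpc_of_trees)
```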

  14. A Ventral Visual Stream Reading Center Independent of Sensory Modality and Visual Experience

    Directory of Open Access Journals (Sweden)

    Lior Reich

    2011-10-01

    Full Text Available The Visual Word Form Area (VWFA) is a ventral-temporal-visual area that develops expertise for visual reading. It encodes letter-strings irrespective of case, font, or location in the visual field, with striking anatomical reproducibility across individuals. In the blind, reading can be achieved using Braille, with a level of expertise comparable to that of sighted readers. We investigated which area plays the role of the VWFA in the blind. One would expect it to be in either parietal or bilateral occipital cortex, reflecting the tactile nature of the task and crossmodal plasticity, respectively. However, according to the notion that brain areas are task specific rather than sensory-modality specific, we predicted recruitment of the left-hemispheric VWFA, identically to the sighted and independent of visual experience. Using fMRI we showed that activation during Braille reading in congenitally blind individuals peaked in the VWFA, with striking anatomical consistency within and between blind and sighted. The VWFA was reading-selective when contrasted to high-level language and low-level sensory controls. Further preliminary results show that the VWFA is selectively activated also when people learn to read in a new language or using a different modality. Thus, the VWFA is a multisensory area specialized for reading regardless of visual experience.

  15. [Associative Learning between Orientation and Color in Early Visual Areas].

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2017-08-01

    Associative learning is an essential neural phenomenon where the contingency of different items increases after training. Although associative learning has been found to occur in many brain regions, there is no clear evidence that associative learning of visual features occurs in early visual areas. Here, we developed an associative decoded functional magnetic resonance imaging (fMRI) neurofeedback (A-DecNef) to determine whether associative learning of color and orientation can be induced in early visual areas. During the three days' training, A-DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was simultaneously physically presented to participants. Consequently, participants perceived "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3 to 5 months after training. These results suggest that long-term associative learning of two different visual features, such as color and orientation, was most likely induced in early visual areas. This newly extended technique that induces associative learning may be used as an important tool for understanding and modifying brain function, since associations are fundamental and ubiquitous with respect to brain function.

  16. Flicker sensitivity as a function of target area with and without temporal noise.

    Science.gov (United States)

    Rovamo, J; Donner, K; Näsänen, R; Raninen, A

    2000-01-01

    Flicker sensitivities (1-30 Hz) in foveal, photopic vision were measured as functions of stimulus area with and without strong external white temporal noise. Stimuli were circular, sinusoidally flickering sharp-edged spots of variable diameters (0.25-4 degrees ) but constant duration (2 s), surrounded by a uniform equiluminant field. The data was described with a model comprising (i) low-pass filtering in the retina (R), with a modulation transfer function (MTF) of a form derived from responses of cones; (ii) normalisation of the temporal luminance distribution by the average luminance; (iii) high-pass filtering by postreceptoral neural pathways (P), with an MTF proportional to temporal frequency; (iv) addition of internal white neural noise (N(i)); (v) integration over a spatial window; and (vi) detection by a suboptimal temporal matched filter of efficiency eta. In strong external noise, flicker sensitivity was independent of spot area. Without external noise, sensitivity increased with the square root of stimulus area (Piper's law) up to a critical area (A(c)), where it reaches a maximum level (S(max)). Both A(c) and eta were monotonic functions of temporal frequency (f), such that log A(c) increased and log eta decreased linearly with log f. Remarkably, the increase in spatial integration area and the decrease in efficiency were just balanced, so A(c)(f)eta(f) was invariant against f. Thus the bandpass characteristics of S(max)(f) directly reflected the composite effect of the distal filters R(f) and P(f). The temporal equivalent (N(it)) of internal neural noise (N(i)) decreased in inverse proportion to spot area up to A(c) and then stayed constant indicating that spatially homogeneous signals and noise are integrated over the same area.
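
    The spatial-summation behaviour reported here (Piper's law up to a critical area A(c), then a frequency-dependent ceiling S(max)) is easy to express numerically. The Python sketch below implements just that saturating square-root summation with illustrative, unfitted parameter values; it is not the full retinal/postreceptoral filter and matched-filter model.

```python
import numpy as np

def flicker_sensitivity(area_deg2, s_max, a_c):
    """Piper's-law spatial summation with saturation at the critical area A_c.

    area_deg2 : stimulus area in deg^2
    s_max     : maximum sensitivity reached at and beyond A_c (frequency dependent)
    a_c       : critical area in deg^2 (grows with temporal frequency in the study)
    """
    area = np.asarray(area_deg2, dtype=float)
    return s_max * np.sqrt(np.minimum(area, a_c) / a_c)

# Illustrative (not fitted) parameters for two temporal frequencies:
areas = np.array([0.05, 0.2, 0.8, 3.0, 12.0])               # deg^2
print(flicker_sensitivity(areas, s_max=120.0, a_c=1.0))     # e.g., a mid frequency
print(flicker_sensitivity(areas, s_max=40.0,  a_c=4.0))     # e.g., a higher frequency
```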

  17. Temporal attention for visual food stimuli in restrained eaters.

    Science.gov (United States)

    Neimeijer, Renate A M; de Jong, Peter J; Roefs, Anne

    2013-05-01

    Although restrained eaters try to limit their food intake, they often fail and indulge in exactly those foods that they want to avoid. A possible explanation is a temporal attentional bias for food cues. It could be that for these people food stimuli are processed relatively efficiently and require fewer attentional resources to enter awareness. Once a food stimulus has captured attention, it may be preferentially processed and granted prioritized access to limited cognitive resources. This might help explain why restrained eaters often fail in their attempts to restrict their food intake. A Rapid Serial Visual Presentation task consisting of dual and single target trials with food and neutral pictures as targets and/or distractors was administered to restrained (n=40) and unrestrained (n=40) eaters to study temporal attentional bias. Results indicated that (1) food cues did not diminish the attentional blink in restrained eaters when presented as the second target; (2) specifically restrained eaters showed an interference effect of identifying food targets on the identification of preceding neutral targets; (3) for both restrained and unrestrained eaters, food cues enhanced the attentional blink; (4) specifically in restrained eaters, food distractors elicited an attentional blink in the single target trials. In restrained eaters, food cues get prioritized access to limited cognitive resources, even if this processing priority interferes with their current goals. This temporal attentional bias for food stimuli might help explain why restrained eaters typically have difficulties maintaining their diet rules. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.

    Science.gov (United States)

    Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J

    2017-02-01

    Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, there is also great fluctuation of vividness within individuals, such that only looking at differences between people necessarily obscures the picture. In this study, we show that variation in moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery is similar to the neural response during perception, the more vivid or perception-like the imagery experience is. Copyright © 2017 the authors.
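
    The overlap measure described here can be illustrated as a simple pattern-similarity analysis: correlate the multivoxel pattern evoked on each imagery trial with the pattern evoked by perceiving the same stimulus, then relate that trial-wise overlap to the vividness ratings. The Python sketch below uses random stand-in data and a plain Pearson/Spearman combination; the authors' multivariate method is more sophisticated, so this conveys only the general idea.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_trials, n_voxels = 60, 500

perception_pattern = rng.normal(size=n_voxels)             # pattern evoked by perceiving the stimulus
imagery_patterns = rng.normal(size=(n_trials, n_voxels))   # one pattern per imagery trial
vividness = rng.integers(1, 5, size=n_trials)              # trial-wise vividness ratings (1-4, stand-in)

# Neural overlap = correlation between each imagery pattern and the perception pattern.
overlap = np.array([np.corrcoef(trial, perception_pattern)[0, 1]
                    for trial in imagery_patterns])

rho, p = spearmanr(overlap, vividness)
print(f"overlap-vividness correlation: rho={rho:.2f}, p={p:.3f}")
```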

  19. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the different nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    Directory of Open Access Journals (Sweden)

    Jonathan M P Wilbiks

    Full Text Available Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty, such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or when high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  1. Visual areas become less engaged in associative recall following memory stabilization.

    Science.gov (United States)

    Nieuwenhuis, Ingrid L C; Takashima, Atsuko; Oostenveld, Robert; Fernández, Guillén; Jensen, Ole

    2008-04-15

    Numerous studies have focused on changes in the activity in the hippocampus and higher association areas with consolidation and memory stabilization. Even though perceptual areas are engaged in memory recall, little is known about how memory stabilization is reflected in those areas. Using magnetoencephalography (MEG) we investigated changes in visual areas with memory stabilization. Subjects were trained on associating a face to one of eight locations. The first set of associations ('stabilized') was learned in three sessions distributed over a week. The second set ('labile') was learned in one session just prior to the MEG measurement. In the recall session only the face was presented and subjects had to indicate the correct location using a joystick. The MEG data revealed robust gamma activity during recall, which started in early visual cortex and propagated to higher visual and parietal brain areas. The occipital gamma power was higher for the labile than the stabilized condition (time=0.65-0.9 s). Also the event-related field strength was higher during recall of labile than stabilized associations (time=0.59-1.5 s). We propose that recall of the spatial associations prior to memory stabilization involves a top-down process relying on reconstructing learned representations in visual areas. This process is reflected in gamma band activity consistent with the notion that neuronal synchronization in the gamma band is required for visual representations. More direct synaptic connections are formed with memory stabilization, thus decreasing the dependence on visual areas.
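
    The occipital gamma-power comparison reported here rests on a standard time-resolved band-power computation: band-pass filter the sensor signal in the gamma range, take the squared Hilbert envelope, and average within the time window of interest. The Python sketch below shows that generic step with simulated data and an assumed sampling rate; it is not the authors' MEG pipeline, which involves source-level and statistical steps not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_power(ts, fs, band=(40.0, 90.0)):
    """Time-resolved gamma-band power via band-pass filtering and the Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, ts)
    return np.abs(hilbert(filtered)) ** 2

fs = 600.0                                    # illustrative MEG sampling rate
t = np.arange(-0.2, 1.5, 1.0 / fs)            # time relative to face onset (s)
labile = np.random.randn(t.size)              # stand-ins for single-sensor condition averages
stabilized = np.random.randn(t.size)

win = (t >= 0.65) & (t <= 0.90)               # window reported for the occipital effect
print(gamma_power(labile, fs)[win].mean(), gamma_power(stabilized, fs)[win].mean())
```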

  2. Objective assessment of chromatic and achromatic pattern adaptation reveals the temporal response properties of different visual pathways.

    Science.gov (United States)

    Robson, Anthony G; Kulikowski, Janus J

    2012-11-01

    The aim was to investigate the temporal response properties of magnocellular, parvocellular, and koniocellular visual pathways using increment/decrement changes in contrast to elicit visual evoked potentials (VEPs). Static achromatic and isoluminant chromatic gratings were generated on a monitor. Chromatic gratings were modulated along red/green (R/G) or subject-specific tritanopic confusion axes, established using a minimum distinct border criterion. Isoluminance was determined using minimum flicker photometry. Achromatic and chromatic VEPs were recorded to contrast increments and decrements of 0.1 or 0.2 superimposed on the static gratings (masking contrast 0-0.6). Achromatic increment/decrement changes in contrast evoked a percept of apparent motion when the spatial frequency was low; VEPs to such stimuli were positive in polarity and largely unaffected by high levels of static contrast, consistent with transient response mechanisms. VEPs to finer achromatic gratings showed marked attenuation as static contrast was increased. Chromatic VEPs to R/G or tritan chromatic contrast increments were of negative polarity and showed progressive attenuation as static contrast was increased, in keeping with increasing desensitization of the sustained responses of the color-opponent visual pathways. Chromatic contrast decrement VEPs were of positive polarity and less sensitive to pattern adaptation. The relative contribution of sustained/transient mechanisms to achromatic processing is spatial frequency dependent. Chromatic contrast increment VEPs reflect the sustained temporal response properties of parvocellular and koniocellular pathways. Cortical VEPs can provide an objective measure of pattern adaptation and can be used to probe the temporal response characteristics of different visual pathways.

  3. Temporal subtraction of dual-energy chest radiographs

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Doshi, Devang J.; Engelmann, Roger; Caligiuri, Philip; MacMahon, Heber

    2006-01-01

    Temporal subtraction and dual-energy imaging are two enhanced radiography techniques that are receiving increased attention in chest radiography. Temporal subtraction is an image processing technique that facilitates the visualization of pathologic change across serial chest radiographic images acquired from the same patient; dual-energy imaging exploits the differential relative attenuation of x-ray photons exhibited by soft-tissue and bony structures at different x-ray energies to generate a pair of images that accentuate those structures. Although temporal subtraction images provide a powerful mechanism for enhancing visualization of subtle change, misregistration artifacts in these images can mimic or obscure abnormalities. The purpose of this study was to evaluate whether dual-energy imaging could improve the quality of temporal subtraction images. Temporal subtraction images were generated from 100 pairs of temporally sequential standard radiographic chest images and from the corresponding 100 pairs of dual-energy, soft-tissue radiographic images. The registration accuracy demonstrated in the resulting temporal subtraction images was evaluated subjectively by two radiologists. The registration accuracy of the soft-tissue-based temporal subtraction images was rated superior to that of the conventional temporal subtraction images. Registration accuracy also was evaluated objectively through an automated method, which achieved an area-under-the-ROC-curve value of 0.92 in the distinction between temporal subtraction images that demonstrated clinically acceptable and clinically unacceptable registration accuracy. By combining dual-energy soft-tissue images with temporal subtraction, misregistration artifacts can be reduced and superior image quality can be obtained
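
    The essence of temporal subtraction is to register the previous radiograph to the current one and subtract, so that interval change stands out while stable anatomy cancels. The Python sketch below illustrates that align-then-subtract step with a translation-only registration (phase correlation) on synthetic images; clinical systems use nonrigid warping and, as this study shows, benefit from starting with dual-energy soft-tissue images, so treat this purely as a toy illustration.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def temporal_subtraction(current, previous):
    """Translation-only temporal subtraction of two (soft-tissue) chest images.

    Phase correlation here is a stand-in for the registration step, just to
    illustrate the align-then-subtract idea; real systems warp nonrigidly.
    """
    current = current.astype(float)
    previous = previous.astype(float)
    offset, _, _ = phase_cross_correlation(current, previous)   # estimated (dy, dx)
    registered_previous = shift(previous, offset, order=1, mode="nearest")
    return current - registered_previous      # change image: new opacities appear bright

# Synthetic example: the "current" image is the "previous" one shifted, plus a new focal change.
rng = np.random.default_rng(2)
previous = rng.normal(size=(256, 256))
current = shift(previous, (3.0, -2.0), order=1, mode="nearest")
current[120:130, 120:130] += 2.0              # simulated interval change
diff = temporal_subtraction(current, previous)
print(diff[120:130, 120:130].mean(), np.abs(diff).mean())
```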

  4. The Effect of Visual, Spatial and Temporal Manipulations on Embodiment and Action

    Science.gov (United States)

    Ratcliffe, Natasha; Newport, Roger

    2017-01-01

    The feeling of owning and controlling the body relies on the integration and interpretation of sensory input from multiple sources with respect to existing representations of the bodily self. Illusion paradigms involving multisensory manipulations have demonstrated that while the senses of ownership and agency are strongly related, these two components of bodily experience may be dissociable and differentially affected by alterations to sensory input. Importantly, however, much of the current literature has focused on the application of sensory manipulations to external objects or virtual representations of the self that are visually incongruent with the viewer’s own body and which are not part of the existing body representation. The current experiment used MIRAGE-mediated reality to investigate how manipulating the visual, spatial and temporal properties of the participant’s own hand (as opposed to a fake/virtual limb) affected embodiment and action. Participants viewed two representations of their right hand inside a MIRAGE multisensory illusions box with opposing visual (normal or grossly distorted), temporal (synchronous or asynchronous) and spatial (precise real location or false location) manipulations applied to each hand. Subjective experiences of ownership and agency towards each hand were measured alongside an objective measure of perceived hand location using a pointing task. The subjective sense of agency was always anchored to the synchronous hand, regardless of physical appearance and location. Subjective ownership also moved with the synchronous hand, except when both the location and appearance of the synchronous limb were incongruent with that of the real limb. Objective pointing measures displayed a similar pattern, however movement synchrony was not sufficient to drive a complete shift in perceived hand location, indicating a greater reliance on the spatial location of the real hand. The results suggest that while the congruence of self

  5. The Effect of Visual, Spatial and Temporal Manipulations on Embodiment and Action

    Directory of Open Access Journals (Sweden)

    Natasha Ratcliffe

    2017-05-01

    Full Text Available The feeling of owning and controlling the body relies on the integration and interpretation of sensory input from multiple sources with respect to existing representations of the bodily self. Illusion paradigms involving multisensory manipulations have demonstrated that while the senses of ownership and agency are strongly related, these two components of bodily experience may be dissociable and differentially affected by alterations to sensory input. Importantly, however, much of the current literature has focused on the application of sensory manipulations to external objects or virtual representations of the self that are visually incongruent with the viewer’s own body and which are not part of the existing body representation. The current experiment used MIRAGE-mediated reality to investigate how manipulating the visual, spatial and temporal properties of the participant’s own hand (as opposed to a fake/virtual limb) affected embodiment and action. Participants viewed two representations of their right hand inside a MIRAGE multisensory illusions box with opposing visual (normal or grossly distorted), temporal (synchronous or asynchronous) and spatial (precise real location or false location) manipulations applied to each hand. Subjective experiences of ownership and agency towards each hand were measured alongside an objective measure of perceived hand location using a pointing task. The subjective sense of agency was always anchored to the synchronous hand, regardless of physical appearance and location. Subjective ownership also moved with the synchronous hand, except when both the location and appearance of the synchronous limb were incongruent with that of the real limb. Objective pointing measures displayed a similar pattern; however, movement synchrony was not sufficient to drive a complete shift in perceived hand location, indicating a greater reliance on the spatial location of the real hand. The results suggest that while the

  6. Time-based forgetting in visual working memory reflects temporal distinctiveness, not decay

    OpenAIRE

    Souza, Alessandra S.; Oberauer, Klaus

    2015-01-01

    Is forgetting from working memory (WM) better explained by decay or interference? The answer to this question is the topic of an ongoing debate. Recently, a number of studies showed that performance in tests of visual WM declines with an increasing unfilled retention interval. This finding was interpreted as revealing decay. Alternatively, it can be explained by interference theories as an effect of temporal distinctiveness. According to decay theories, forgetting depends on the absolute time el...

  7. Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2018-01-01

    , exploiting both spatial and temporal information of the face to assess pain level, and second, incorporating multiple visual modalities to capture complementary face information related to pain. Most works in the literature focus on merely exploiting spatial information on chromatic (RGB) video data......PAIN)' database, for RGBDT pain level recognition in sequences. We provide first baseline results, including recognition of 5 pain levels, by analyzing independent visual modalities and their fusion with CNN and LSTM models. From the experimental evaluation we observe that fusion of modalities helps to enhance...... recognition performance of pain levels in comparison to isolated ones. In particular, the combination of RGB, D, and T in an early fusion fashion achieved the best recognition rate....

  8. Visual advantage in deaf adults linked to retinal changes.

    Directory of Open Access Journals (Sweden)

    Charlotte Codina

    Full Text Available The altered sensory experience of profound early-onset deafness sometimes provokes large-scale neural reorganisations. In particular, auditory-visual cross-modal plasticity occurs, wherein redundant auditory cortex becomes recruited to vision. However, the effect of human deafness on neural structures involved in visual processing prior to the visual cortex has never been investigated, either in humans or animals. We investigated neural changes at the retina and optic nerve head in profoundly deaf (N = 14) and hearing (N = 15) adults using Optical Coherence Tomography (OCT), an in-vivo light interference method of quantifying retinal micro-structure. We compared retinal changes with behavioural results from the same deaf and hearing adults, measuring sensitivity in the peripheral visual field using Goldmann perimetry. Deaf adults had significantly larger neural rim areas within the optic nerve head in comparison to hearing controls, suggesting greater retinal ganglion cell numbers. Deaf adults also demonstrated significantly larger visual field areas (indicating greater peripheral sensitivity) than controls. Furthermore, neural rim area was significantly correlated with visual field area in both deaf and hearing adults. Deaf adults also showed a significantly different pattern of retinal nerve fibre layer (RNFL) distribution compared to controls. Significant correlations between the depth of the RNFL at the inferior-nasal peripapillary retina and the corresponding far temporal and superior temporal visual field areas (sensitivity) were found. Our results show that cross-modal plasticity after early onset deafness may not be limited to the sensory cortices, noting specific retinal adaptations in early onset deaf adults which are significantly correlated with peripheral vision sensitivity.

  9. Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions.

    Science.gov (United States)

    van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean

    2017-04-15

    A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  11. Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization.

    Science.gov (United States)

    Bladin, Karl; Axelsson, Emil; Broberg, Erik; Emmart, Carter; Ljung, Patric; Bock, Alexander; Ynnerman, Anders

    2017-08-29

    Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty in grasping the context and the intricate acquisition process. We present work on tailoring and integration of multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we are significantly shortening the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case is showing dynamic processes, such as concurrent weather conditions on Earth that require temporal datasets. As a final example we use data from the New Horizons spacecraft which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
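
    The chunked level-of-detail idea mentioned here is conventionally driven by a screen-space error test: a chunk is rendered as-is if its geometric error projects to fewer pixels than a tolerance, and is otherwise split into its children, whose tiles would be streamed in on demand. The Python sketch below shows that selection rule with illustrative numbers; it ignores per-chunk view distances and culling, and it is not OpenSpace's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Chunk:
    level: int               # quadtree depth (0 = whole face of the globe)
    geometric_error: float   # metres of error if this chunk is rendered without refinement

def screen_space_error(chunk, distance_m, viewport_px=1920, fov_rad=1.0):
    """Projected size, in pixels, of a chunk's geometric error at a given viewing distance."""
    pixels_per_metre = viewport_px / (2.0 * distance_m * math.tan(fov_rad / 2.0))
    return chunk.geometric_error * pixels_per_metre

def select_chunks(chunk, distance_m, tolerance_px=2.0, max_level=18):
    """Refine a chunk into four children until its screen-space error drops below the tolerance."""
    if chunk.level >= max_level or screen_space_error(chunk, distance_m) <= tolerance_px:
        return [chunk]                      # render this chunk; its tile would be streamed in here
    children = [Chunk(chunk.level + 1, chunk.geometric_error / 2.0) for _ in range(4)]
    selected = []
    for child in children:                  # a real traversal would use each child's own distance
        selected.extend(select_chunks(child, distance_m, tolerance_px, max_level))
    return selected

root = Chunk(level=0, geometric_error=100000.0)       # illustrative numbers only
print(len(select_chunks(root, distance_m=4.0e6)))     # far from the planet: coarse chunks suffice
print(len(select_chunks(root, distance_m=1.0e6)))     # closer: the quadtree refines further
```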

  12. Visual aesthetics study: Gibson Dome area, Paradox Basin, Utah

    International Nuclear Information System (INIS)

    1984-03-01

    The Visual Aesthetics study was performed as an initial assessment of concerns regarding impacts to visual resources that might be associated with the construction of a geologic nuclear waste repository and associated rail routes in the Gibson Dome location of southeastern Utah. Potential impacts to visual resources were evaluated by predicting visibility of the facility and railway routes using the US Forest Service (USFS) computer program, VIEWIT, and by applying the Bureau of Land Management (BLM) Visual Resource Management (VRM) methodology. Five proposed facility sites in the Gibson Dome area and three proposed railway routes were evaluated for visual impact. 10 references, 19 figures, 5 tables

  13. Masked immediate-repetition-priming effect on the early lexical process in the bilateral anterior temporal areas assessed by neuromagnetic responses.

    Science.gov (United States)

    Fujimaki, Norio; Hayakawa, Tomoe; Ihara, Aya; Matani, Ayumu; Wei, Qiang; Terazono, Yasushi; Murata, Tsutomu

    2010-10-01

    A masked priming paradigm has been used to measure unconscious and automatic context effects on the processing of words. However, its spatiotemporal neural basis has not yet been clarified. To test the hypothesis that masked repetition priming causes enhancement of neural activation, we conducted a magnetoencephalography experiment in which a prime was visually presented for a short duration (50 ms), preceded by a mask pattern, and followed by a target word that was represented by a Japanese katakana syllabogram. The prime, which was identical to the target, was represented by another hiragana syllabogram in the "Repeated" condition, whereas it was a string of unreadable pseudocharacters in the "Unrepeated" condition. Subjects executed a categorical decision task on the target. Activation was significantly larger for the Repeated condition than for the Unrepeated condition at a time window of 150-250 ms in the right occipital area, 200-250 ms in the bilateral ventral occipitotemporal areas, and 200-250 ms and 200-300 ms in the left and right anterior temporal areas, respectively. These areas have been reported to be related to processing of visual-form/orthography and lexico-semantics, and the enhanced activation supports the hypothesis. However, the absence of the priming effect in the areas related to phonological processing implies that automatic phonological priming effect depends on task requirements. 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  14. Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2018-01-01

    , exploiting both spatial and temporal information of the face to assess pain level, and second, incorporating multiple visual modalities to capture complementary face information related to pain. Most works in the literature focus on merely exploiting spatial information on chromatic (RGB) video data...... recognition performance of pain levels in comparison to isolated ones. In particular, the combination of RGB, D, and T in an early fusion fashion achieved the best recognition rate....

  15. Peripheral Visual Cues: Their Fate in Processing and Effects on Attention and Temporal-Order Perception.

    Science.gov (United States)

    Tünnermann, Jan; Scharlau, Ingrid

    2016-01-01

    Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments. In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued temporal-order judgments, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued TOJs were insensitive to these effects because in psychometric distributions they are concealed by the conjoint effects of cue-target confusions and processing rate changes.
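
    The abstract does not spell out the model's equations, but the general shape of such rate-based TOJ models can be illustrated with a textbook independent exponential-race formulation, in which a cue changes the processing rate of the cued versus uncued target and thereby shifts the psychometric function. The Python sketch below gives that standard race-model psychometric function; it is offered as a generic illustration, not as the authors' TVA-based model, which additionally includes cue-target confusions.

```python
import numpy as np

def p_cued_first(soa_ms, v_cued, v_uncued):
    """Probability that the cued stimulus is perceived first in a TOJ.

    Textbook independent exponential-race model (an illustration, not the
    paper's model): encoding latencies are exponential with rates v_cued and
    v_uncued (items/ms); soa_ms > 0 means the cued stimulus is physically
    presented first by that amount.
    """
    soa = np.asarray(soa_ms, dtype=float)
    total = v_cued + v_uncued
    cued_leads = 1.0 - (v_uncued / total) * np.exp(-v_cued * soa)
    uncued_leads = (v_cued / total) * np.exp(-v_uncued * (-soa))
    return np.where(soa >= 0, cued_leads, uncued_leads)

soas = np.array([-80, -40, 0, 40, 80])                    # ms
print(p_cued_first(soas, v_cued=0.04, v_uncued=0.02))     # faster cued processing shifts the curve
print(p_cued_first(soas, v_cued=0.02, v_uncued=0.02))     # equal rates: point of subjective simultaneity at SOA = 0
```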

  16. Auditory motion in the sighted and blind: Early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions.

    Science.gov (United States)

    Dormal, Giulia; Rezk, Mohamed; Yakobov, Esther; Lepore, Franco; Collignon, Olivier

    2016-07-01

    How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting individually defined visual area hMT+/V5 however indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions that typically support the processing of motion information. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Three-dimensional visualization of geographical terrain data using temporal parallax difference induction

    Science.gov (United States)

    Mayhew, Christopher A.; Mayhew, Craig M.

    2009-02-01

    Vision III Imaging, Inc. (the Company) has developed Parallax Image Display (PID™) software tools to critically align and display aerial images with parallax differences. Terrain features are rendered obvious to the viewer when critically aligned images are presented alternately at 4.3 Hz. The recent inclusion of digital elevation models in geographic data browsers now allows true three-dimensional parallax to be acquired from virtual globe programs like Google Earth. The authors have successfully developed PID methods and code that allow three-dimensional geographical terrain data to be visualized using temporal parallax differences.
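
    The display principle described here, alternating two critically aligned views of the same terrain at roughly 4.3 Hz so that residual parallax is perceived as depth, can be mocked up in a few lines. The following Python sketch uses matplotlib animation and a synthetic image pair in place of real aerial frames; it only illustrates the alternation timing, not the Company's alignment tools.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from scipy.ndimage import shift

# Stand-ins for two aerial frames of the same terrain taken from slightly
# different camera positions (here: one image and a horizontally shifted copy).
rng = np.random.default_rng(3)
left = rng.random((256, 256))
right = shift(left, (0.0, 4.0), order=1, mode="nearest")

frames = [left, right]
fig, ax = plt.subplots()
im = ax.imshow(frames[0], cmap="gray")
ax.set_axis_off()

def flip(i):
    # Alternate the two critically aligned views; residual parallax reads as depth.
    im.set_data(frames[i % 2])
    return (im,)

# 4.3 Hz alternation -> roughly 233 ms per frame.
anim = FuncAnimation(fig, flip, interval=1000.0 / 4.3, blit=True, cache_frame_data=False)
plt.show()
```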

  18. Modality and Perceptual-Motor Experience Influence the Detection of Temporal Deviations in Tap Dance Sequences

    Directory of Open Access Journals (Sweden)

    Mauro Murgia

    2017-08-01

    Full Text Available Accurate temporal information processing is critically important in many motor activities within disciplines such as dance, music, and sport. However, it is still unclear how temporal information related to biological motion is processed by expert and non-expert performers. It is well-known that the auditory modality dominates the visual modality in processing temporal information of simple stimuli, and that experts outperform non-experts in biological motion perception. In the present study, we combined these two areas of research; we investigated how experts and non-experts detected temporal deviations in tap dance sequences, in the auditory modality compared to the visual modality. We found that temporal deviations were better detected in the auditory modality compared to the visual modality, and by experts compared to non-experts. However, post hoc analyses indicated that these effects were mainly due to performances obtained by experts in the auditory modality. The results suggest that the experience advantage is not equally distributed across the modalities, and that tap dance experience enhances the effectiveness of the auditory modality but not the visual modality when processing temporal information. The present results and their potential implications are discussed in both temporal information processing and biological motion perception frameworks.

  19. Task-modulated activation and functional connectivity of the temporal and frontal areas during speech comprehension.

    Science.gov (United States)

    Yue, Q; Zhang, L; Xu, G; Shu, H; Li, P

    2013-05-01

    There is general consensus in the literature that a distributed network of temporal and frontal brain areas is involved in speech comprehension. However, how active versus passive tasks modulate the activation and the functional connectivity of the critical brain areas is not clearly understood. In this study, we used functional magnetic resonance imaging (fMRI) to identify intelligibility and task-related effects in speech comprehension. Participants performed a semantic judgment task on normal and time-reversed sentences, or passively listened to the sentences without making an overt response. The subtraction analysis demonstrated that passive sentence comprehension mainly engaged brain areas in the left anterior and posterior superior temporal sulcus and middle temporal gyrus (aSTS/MTG and pSTS/MTG), whereas active sentence comprehension recruited bilateral frontal regions in addition to the aSTS/MTG and pSTS/MTG regions. Functional connectivity analysis revealed that during passive sentence comprehension, the left aSTS/MTG was functionally connected with the left Heschl's gyrus (HG) and bilateral superior temporal gyrus (STG) but no area was functionally connected with the left pSTS/MTG; during active sentence comprehension, however, both the left aSTS/MTG and pSTS/MTG were functionally connected with bilateral superior temporal and inferior frontal areas. While these results are consistent with the view that the ventral stream of the temporo-frontal network subserves semantic processing, our findings further indicate that both the activation and the functional connectivity of the temporal and frontal areas are modulated by task demands. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Spatio-Temporal Modelling of Dust Transport over Surface Mining Areas and Neighbouring Residential Zones

    Directory of Open Access Journals (Sweden)

    Eva Gulikova

    2008-06-01

    Projects focusing on spatio-temporal modelling of the living environment need to manage a wide range of terrain measurements, existing spatial data, time series, results of spatial analysis and inputs/outputs from numerical simulations. Thus, GISs are often used to manage data from remote sensors, to provide advanced spatial analysis and to integrate numerical models. In order to demonstrate the integration of spatial data, time series and methods in the framework of the GIS, we present a case study focused on the modelling of dust transport over a surface coal mining area, exploring spatial data from 3D laser scanners, GPS measurements, aerial images, time series of meteorological observations, inputs/outputs from numerical models and existing geographic resources. To achieve this, digital terrain models, layers including GPS thematic mapping, and scenes with simulation of wind flows are created to visualize and interpret coal dust transport over the mine area and a neighbouring residential zone. A temporary coal storage and sorting site, located near the residential zone, is one of the dominant sources of emissions. Using numerical simulations, the possible effects of wind flows are observed over the surface, modified by natural objects and man-made obstacles. The coal dust drifts with the wind in the direction of the residential zone and is partially deposited in this area. The simultaneous display of the digital map layers together with the location of the dominant emission source, wind flows and protected areas enables a risk assessment of the dust deposition in the area of interest to be performed. In order to obtain a more accurate simulation of wind flows over the temporary storage and sorting site, 3D laser scanning and GPS thematic mapping are used to create a more detailed digital terrain model. Thus, visualization of wind flows over the area of interest combined with 3D map layers enables the exploration of the processes of coal dust

  1. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  2. Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study

    Science.gov (United States)

    Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.

    2012-01-01

    Background: Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims: To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method: In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results: Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions: Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014

  3. Visual Analytics for the Food-Water-Energy Nexus in the Phoenix Active Management Area

    Science.gov (United States)

    Maciejewski, R.; Mascaro, G.; White, D. D.; Ruddell, B. L.; Aggarwal, R.; Sarjoughian, H.

    2016-12-01

    The Phoenix Active Management Area (AMA) is an administrative region of 14,500 km2 identified by the Arizona Department of Water Resources with the aim of reaching and maintaining the safe yield (i.e. balance between annual amount of groundwater withdrawn and recharged) by 2025. The AMA includes the Phoenix metropolitan area, which has experienced a dramatic population growth over the last decades with a progressive conversion of agricultural land into residential land. As a result of these changes, the water and energy demand as well as the food production in the region have significantly evolved over the last 30 years. Given the arid climate, a crucial role to support this growth has been the creation of a complex water supply system based on renewable and non-renewable resources, including the energy-intensive Central Arizona Project. In this talk, we present a preliminary characterization of the evolution in time of the feedbacks between food, water, and energy in the Phoenix AMA by analyzing secondary data (available from water and energy providers, irrigation districts, and municipalities), as well as satellite imagery and primary data collected by the authors. A preliminary visual analytics framework is also discussed describing current design practices and ideas for exploring networked components and cascading impacts within the FEW Nexus. This analysis and framework represent the first steps towards the development of an integrated modeling, visualization, and decision support infrastructure for comprehensive FEW systems decision making at decision-relevant temporal and spatial scales.

  4. Learning of Temporal and Spatial Movement Aspects: A Comparison of Four Types of Haptic Control and Concurrent Visual Feedback.

    Science.gov (United States)

    Rauter, Georg; Sigrist, Roland; Riener, Robert; Wolf, Peter

    2015-01-01

    In the literature, the effectiveness of haptics for motor learning is a matter of debate. Haptics is believed to be effective for motor learning in general; however, different types of haptic control enhance different movement aspects. Thus, depending on the movement aspects of interest, one type of haptic control may be effective whereas another is not. Therefore, the current work investigated whether and how different types of haptic controllers affect learning of spatial and temporal movement aspects. In particular, haptic controllers that enforce active participation of the participants were expected to improve spatial aspects. Only haptic controllers that provide feedback about the task's velocity profile were expected to improve temporal aspects. In a study on learning a complex trunk-arm rowing task, the effect of training with four different types of haptic control was investigated: position control, path control, adaptive path control, and reactive path control. A fifth group (control) trained with visual concurrent augmented feedback. As hypothesized, the position controller was most effective for learning of temporal movement aspects, while the path controller was most effective in teaching spatial movement aspects of the rowing task. Visual feedback was also effective for learning temporal and spatial movement aspects.

  5. Visual memory-deficit amnesia: A distinct amnesic presentation and etiology

    OpenAIRE

    Rubin, David C.; Greenberg, Daniel L.

    1998-01-01

    We describe a form of amnesia, which we have called visual memory-deficit amnesia, that is caused by damage to areas of the visual system that store visual information. Because it is caused by a deficit in access to stored visual material and not by an impaired ability to encode or retrieve new material, it has the otherwise infrequent properties of a more severe retrograde than anterograde amnesia with no temporal gradient in the retrograde amnesia. Of the 11 cases of long-term visual memory...

  6. Structural asymmetry of cortical visual areas is related to ocular dominance

    DEFF Research Database (Denmark)

    Jensen, Bettina H; Hougaard, Anders; Amin, Faisal M

    2015-01-01

    The grey matter of the human brain is asymmetrically distributed between the cerebral hemispheres. This asymmetry includes visual areas, but its relevance to visual function is not understood. Voxel-based morphometry is a well-established technique for localization and quantification of cerebral... Lateralized visual areas were identified, both right>left and left>right. When correlating the asymmetries to the functional parameters, we found a significant correlation to ocular dominance (P... ...was identified to be significantly larger in the left hemisphere for right-eyed participants and vice versa. These results suggest a cerebral basis for ocular dominance.

  7. Application of GIS and Visualization Technology in the Regional-Scale Ground-Water Modeling of the Twentynine Palms and San Jose Areas, California

    Science.gov (United States)

    Li, Z.

    2003-12-01

    Application of GIS and visualization technology significantly contributes to the efficiency and success of developing ground-water models in the Twentynine Palms and San Jose areas, California. Visualizations from GIS and other tools can help to formulate the conceptual model by quickly revealing the basinwide geohydrologic characteristics and changes of a ground-water flow system, and by identifying the most influential components of system dynamics. In addition, 3-D visualizations and animations can help validate the conceptual formulation and the numerical calibration of the model by checking for model-input data errors, revealing cause and effect relationships, and identifying hidden design flaws in model layering and other critical flow components. Two case studies will be presented: The first is a desert basin (near the town of Twentynine Palms) characterized by a fault-controlled ground-water flow system. The second is a coastal basin (Santa Clara Valley including the city of San Jose) characterized by complex, temporally variable flow components, including artificial recharge through a large system of ponds and stream channels, dynamically changing inter-layer flow from hundreds of multi-aquifer wells, pumping-driven subsidence and recovery, and climatically variable natural recharge. For the Twentynine Palms area, more than 10,000 historical ground-water level and water-quality measurements were retrieved from the USGS databases. The combined use of GIS and visualization tools allowed these data to be swiftly organized and interpreted, and depicted by water-level and water-quality maps with a variety of themes for different uses. Overlaying and cross-correlating these maps with other hydrological, geological, geophysical, and geochemical data not only helped to quickly identify the major geohydrologic characteristics controlling the natural variation of hydraulic head in space, such as faults, basin-bottom altitude, and aquifer stratigraphies, but also

  8. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Brain signal complexity rises with repetition suppression in visual learning.

    Science.gov (United States)

    Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah

    2016-06-21

    Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity have yet to be demonstrated within the same measurements. We hypothesized that RS and brain signal complexity increase occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal and sham. Following tDCS, participants learned unfamiliar faces, while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information. Such processes likely involve long-range coordinated activity between prefrontal and lower order visual
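
    Multiscale entropy (MSE), the complexity estimate used here, is commonly computed by coarse-graining the signal at successive time scales and taking the sample entropy of each coarse-grained series. The sketch below illustrates that generic definition on a synthetic signal; it is not the authors' exact EEG pipeline, and the parameter choices (m = 2, r = 0.2 × SD, five scales) are conventional assumptions.

```python
# Generic multiscale-entropy sketch on a synthetic signal (illustrative only).
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn: negative log of the conditional probability that template pairs
    matching for m points (within tolerance r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(dist <= r) - len(templates)   # exclude self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """Coarse-grain by non-overlapping averaging, then compute SampEn per scale."""
    values = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse))
    return values

signal = np.random.default_rng(1).standard_normal(1000)  # stand-in for one EEG epoch
print(multiscale_entropy(signal))
```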

  10. A functional magnetic resonance imaging study mapping the episodic memory encoding network in temporal lobe epilepsy

    Science.gov (United States)

    Sidhu, Meneka K.; Stretton, Jason; Winston, Gavin P.; Bonelli, Silvia; Centeno, Maria; Vollmar, Christian; Symms, Mark; Thompson, Pamela J.; Koepp, Matthias J.

    2013-01-01

    Functional magnetic resonance imaging has demonstrated reorganization of memory encoding networks within the temporal lobe in temporal lobe epilepsy, but little is known of the extra-temporal networks in these patients. We investigated the temporal and extra-temporal reorganization of memory encoding networks in refractory temporal lobe epilepsy and the neural correlates of successful subsequent memory formation. We studied 44 patients with unilateral temporal lobe epilepsy and hippocampal sclerosis (24 left) and 26 healthy control subjects. All participants performed a functional magnetic resonance imaging memory encoding paradigm of faces and words with subsequent out-of-scanner recognition assessments. A blocked analysis was used to investigate activations during encoding and neural correlates of subsequent memory were investigated using an event-related analysis. Event-related activations were then correlated with out-of-scanner verbal and visual memory scores. During word encoding, control subjects activated the left prefrontal cortex and left hippocampus whereas patients with left hippocampal sclerosis showed significant additional right temporal and extra-temporal activations. Control subjects displayed subsequent verbal memory effects within left parahippocampal gyrus, left orbitofrontal cortex and fusiform gyrus whereas patients with left hippocampal sclerosis activated only right posterior hippocampus, parahippocampus and fusiform gyrus. Correlational analysis showed that patients with left hippocampal sclerosis with better verbal memory additionally activated left orbitofrontal cortex, anterior cingulate cortex and left posterior hippocampus. During face encoding, control subjects showed right lateralized prefrontal cortex and bilateral hippocampal activations. Patients with right hippocampal sclerosis showed increased temporal activations within the superior temporal gyri bilaterally and no increased extra-temporal areas of activation compared with

  11. Sex & vision I: Spatio-temporal resolution

    Directory of Open Access Journals (Sweden)

    Abramov Israel

    2012-09-01

    Background: Cerebral cortex has a very large number of testosterone receptors, which could be a basis for sex differences in sensory functions. For example, audition has clear sex differences, which are related to serum testosterone levels. Of all major sensory systems only vision has not been examined for sex differences, which is surprising because the occipital lobe (primary visual projection area) may have the highest density of testosterone receptors in the cortex. We have examined a basic visual function: spatial and temporal pattern resolution and acuity. Methods: We tested large groups of young adults with normal vision. They were screened with a battery of standard tests that examined acuity, color vision, and stereopsis. We sampled the visual system’s contrast-sensitivity function (CSF) across the entire spatio-temporal space: 6 spatial frequencies at each of 5 temporal rates. Stimuli were gratings with sinusoidal luminance profiles generated on a special-purpose computer screen; their contrast was also sinusoidally modulated in time. We measured threshold contrasts using a criterion-free (forced-choice), adaptive psychophysical method (QUEST algorithm). Also, each individual’s acuity limit was estimated by fitting his or her data with a model and extrapolating to find the spatial frequency corresponding to 100% contrast. Results: At a very low temporal rate, the spatial CSF was the canonical inverted-U; but for higher temporal rates, the maxima of the spatial CSFs shifted: Observers lost sensitivity at high spatial frequencies and gained sensitivity at low frequencies; also, all the maxima of the CSFs shifted by about the same amount in spatial frequency. Main effect: there was a significant (ANOVA) sex difference. Across the entire spatio-temporal domain, males were more sensitive, especially at higher spatial frequencies; similarly males had significantly better acuity at all temporal rates. Conclusion: As with other sensory systems
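
    The acuity estimate described in the Methods, fitting each observer's contrast-sensitivity data with a model and extrapolating to the spatial frequency at which threshold contrast reaches 100% (sensitivity = 1), can be sketched numerically. The example below uses a log-parabola CSF and made-up thresholds purely for illustration; the authors' actual model and data are not given in this abstract.

```python
# Hedged sketch: fit a log-parabola CSF and extrapolate to the spatial frequency
# where sensitivity drops to 1 (100% contrast). Model and data are assumptions.
import numpy as np
from scipy.optimize import curve_fit, brentq

def csf(f, peak_sens, peak_freq, bandwidth):
    """Log-parabola contrast sensitivity as a function of spatial frequency f (cpd)."""
    return peak_sens * 10 ** (-(np.log10(f) - np.log10(peak_freq)) ** 2 / (2 * bandwidth ** 2))

freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])                  # cycles/degree
thresholds = np.array([0.020, 0.010, 0.008, 0.010, 0.040, 0.300])  # hypothetical QUEST thresholds
sensitivity = 1.0 / thresholds

params, _ = curve_fit(csf, freqs, sensitivity, p0=[100.0, 3.0, 0.5])

# Acuity limit: frequency (beyond the CSF peak) where the fit crosses sensitivity = 1
acuity = brentq(lambda f: csf(f, *params) - 1.0, params[1], 200.0)
print(f"estimated acuity ≈ {acuity:.1f} cpd")
```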

  12. Visual habit formation in monkeys with neurotoxic lesions of the ventrocaudal neostriatum

    Science.gov (United States)

    Fernandez-Ruiz, Juan; Wang, Jin; Aigner, Thomas G.; Mishkin, Mortimer

    2001-01-01

    Visual habit formation in monkeys, assessed by concurrent visual discrimination learning with 24-h intertrial intervals (ITI), was found earlier to be impaired by removal of the inferior temporal visual area (TE) but not by removal of either the medial temporal lobe or inferior prefrontal convexity, two of TE's major projection targets. To assess the role in this form of learning of another pair of structures to which TE projects, namely the rostral portion of the tail of the caudate nucleus and the overlying ventrocaudal putamen, we injected a neurotoxin into this neostriatal region of several monkeys and tested them on the 24-h ITI task as well as on a test of visual recognition memory. Compared with unoperated monkeys, the experimental animals were unaffected on the recognition test but showed an impairment on the 24-h ITI task that was highly correlated with the extent of their neostriatal damage. The findings suggest that TE and its projection areas in the ventrocaudal neostriatum form part of a circuit that selectively mediates visual habit formation. PMID:11274442

  13. Effects of temporal integration on the shape of visual backward masking functions.

    Science.gov (United States)

    Francis, Gregory; Cho, Yang Seok

    2008-10-01

    Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which can sometimes be monotonic increasing but other times is U-shaped. Theories of backward masking have long hypothesized that temporal integration of the target and mask influences properties of masking but have not connected the influence of integration with the shape of the masking function. With two experiments that vary the spatial properties of the target and mask, the authors provide evidence that temporal integration of the stimuli plays a critical role in determining the shape of the masking function. The resulting data both challenge current theories of backward masking and indicate what changes to the theories are needed to account for the new data. The authors further discuss the implication of the findings for uses of backward masking to explore other aspects of cognition.

  14. Temporal modulation visual fields, normal aging, Parkinson's disease and methyl-mercury in the James Bay Cree: a feasibility study

    Directory of Open Access Journals (Sweden)

    Jocelyn Faubert

    2003-01-01

    We assessed temporal modulation visual fields (TMFs) for 91 observers including controls, Parkinson patients and members of the James Bay Cree community of Northern Québec suspected of being chronically exposed to relatively low levels of methyl-mercury. The main goal was to establish the feasibility of using such procedures to rapidly evaluate visual function in a large field study with the James Bay Cree community. The results show clear normal aging effects on TMFs and the pattern of loss differed depending on the flicker rates used. Group data comparisons between the controls and the experimental groups showed significant effects only between the Cree and normal controls in the 40 to 49 year-old age category for the low temporal frequency condition (2 Hz). Examples of individual analysis show a Cree observer with severe visual field constriction at the 2 Hz condition with a normal visual field at the 16 Hz condition, and a reverse pattern was demonstrated for a Parkinson's patient where a visual field constriction was evident only for the 16 Hz condition. The general conclusions are: Such a technique can be used to evaluate the visual consequences of neuropathological disorders and it may lead to dissociation between certain neurotoxic and neurodegenerative effects depending on the parameters used; this technique can be used for a large field study because it is rapid and easily understood and performed by the subjects; the TMF procedure used showed good test-retest correlations; normal aging causes changes in TMF profiles but the changes will show different patterns throughout the visual field depending on the parameters used.

  15. The role of temporal structure in human vision.

    Science.gov (United States)

    Blake, Randolph; Lee, Sang-Hun

    2005-03-01

    Gestalt psychologists identified several stimulus properties thought to underlie visual grouping and figure/ground segmentation, and among those properties was common fate: the tendency to group together individual objects that move together in the same direction at the same speed. Recent years have witnessed an upsurge of interest in visual grouping based on other time-dependent sources of visual information, including synchronized changes in luminance, in motion direction, and in figure/ ground relations. These various sources of temporal grouping information can be subsumed under the rubric temporal structure. In this article, the authors review evidence bearing on the effectiveness of temporal structure in visual grouping. They start with an overview of evidence bearing on temporal acuity of human vision, covering studies dealing with temporal integration and temporal differentiation. They then summarize psychophysical studies dealing with figure/ground segregation based on temporal phase differences in deterministic and stochastic events. The authors conclude with a brief discussion of neurophysiological implications of these results.

  16. Brain activity related to working memory for temporal order and object information.

    Science.gov (United States)

    Roberts, Brooke M; Libby, Laura A; Inhoff, Marika C; Ranganath, Charan

    2017-06-08

    Maintaining items in an appropriate sequence is important for many daily activities; however, remarkably little is known about the neural basis of human temporal working memory. Prior work suggests that the prefrontal cortex (PFC) and medial temporal lobe (MTL), including the hippocampus, play a role in representing information about temporal order. The involvement of these areas in successful temporal working memory, however, is less clear. Additionally, it is unknown whether regions in the PFC and MTL support temporal working memory across different timescales, or at coarse or fine levels of temporal detail. To address these questions, participants were scanned while completing 3 working memory task conditions (Group, Position and Item) that were matched in terms of difficulty and the number of items to be actively maintained. Group and Position trials probed temporal working memory processes, requiring the maintenance of hierarchically organized coarse and fine temporal information, respectively. To isolate activation related to temporal working memory, Group and Position trials were contrasted against Item trials, which required detailed working memory maintenance of visual objects. Results revealed that working memory encoding and maintenance of temporal information relative to visual information was associated with increased activation in dorsolateral PFC (DLPFC), and perirhinal cortex (PRC). In contrast, maintenance of visual details relative to temporal information was characterized by greater activation of parahippocampal cortex (PHC), medial and anterior PFC, and retrosplenial cortex. In the hippocampus, a dissociation along the longitudinal axis was observed such that the anterior hippocampus was more active for working memory encoding and maintenance of visual detail information relative to temporal information, whereas the posterior hippocampus displayed the opposite effect. Posterior parietal cortex was the only region to show sensitivity to temporal

  17. Functional magnetic resonance imaging by visual stimulation

    International Nuclear Information System (INIS)

    Nishimura, Yukiko; Negoro, Kiyoshi; Morimatsu, Mitsunori; Hashida, Masahiro

    1996-01-01

    We evaluated functional magnetic resonance images obtained in 8 healthy subjects in response to visual stimulation using a conventional clinical magnetic resonance imaging system with multi-slice spin-echo echo planar imaging. Activation in the visual cortex was clearly demonstrated by the multi-slice experiment with a task-related change in signal intensity. In addition to the primary visual cortex, other areas were also activated by a complicated visual task. Multi-slice spin-echo echo planar imaging offers high temporal resolution and allows the three-dimensional analysis of brain function. Functional magnetic resonance imaging provides a useful noninvasive method of mapping brain function. (author)

  18. Loss of nonphosphorylated neurofilament immunoreactivity in temporal cortical areas in Alzheimer's disease.

    Science.gov (United States)

    Thangavel, R; Sahu, S K; Van Hoesen, G W; Zaheer, A

    2009-05-05

    The distribution of immunoreactive neurons with nonphosphorylated neurofilament protein (SMI32) was studied in temporal cortical areas in normal subjects and in patients with Alzheimer's disease (AD). SMI32 immunopositive neurons were localized mainly in cortical layers II, III, V and VI, and were medium to large-sized pyramidal neurons. Patients with AD had prominent degeneration of SMI32 positive neurons in layers III and V of Brodmann areas 38, 36, 35 and 20; in layers II and IV of the entorhinal cortex (Brodmann area 28); and hippocampal neurons. Neurofibrillary tangles (NFTs) were stained with Thioflavin-S and with an antibody (AT8) against hyperphosphorylated tau. The NFT distribution was compared to that of the neuronal cytoskeletal marker SMI32 in these temporal cortical regions. The results showed that the loss of SMI32 immunoreactivity in temporal cortical regions of AD brain is paralleled by an increase in NFTs and AT8 immunoreactivity in neurons. The SMI32 immunoreactivity was drastically reduced in the cortical layers where tangle-bearing neurons are localized. A strong SMI32 immunoreactivity was observed in numerous neurons containing NFTs by double-immunolabeling with SMI32 and AT8. However, few neurons were labeled by AT8 and SMI32. These results suggest that the development of NFTs in some neurons results from some alteration in SMI32 expression, but does not account for all, particularly, early NFT-related changes. Also, there is a clear correlation of NFTs with selective population of pyramidal neurons in the temporal cortical areas and these pyramidal cells are specifically prone to formation of paired helical filaments. Furthermore, these pyramidal neurons might represent a significant portion of the neurons of origin of long corticocortical connection, and consequently contribute to the destruction of memory-related input to the hippocampal formation.

  19. Parametric fMRI analysis of visual encoding in the human medial temporal lobe.

    Science.gov (United States)

    Rombouts, S A; Scheltens, P; Machielson, W C; Barkhof, F; Hoogenraad, F G; Veltman, D J; Valk, J; Witter, M P

    1999-01-01

    A number of functional brain imaging studies indicate that the medial temporal lobe system is crucially involved in encoding new information into memory. However, most studies were based on differences in brain activity between encoding of familiar vs. novel stimuli. To further study the underlying cognitive processes, we applied a parametric design of encoding. Seven healthy subjects were instructed to encode complex color pictures into memory. Stimuli were presented in a parametric fashion at different rates, thus representing different loads of encoding. Functional magnetic resonance imaging (fMRI) was used to assess changes in brain activation. To determine the number of pictures successfully stored into memory, recognition scores were determined afterwards. During encoding, brain activation occurred in the medial temporal lobe, comparable to the results obtained by others. Increasing the encoding load resulted in an increase in the number of successfully stored items. This was reflected in a significant increase in brain activation in the left lingual gyrus, in the left and right parahippocampal gyrus, and in the right inferior frontal gyrus. This study shows that fMRI can detect changes in brain activation during variation of one aspect of higher cognitive tasks. Further, it strongly supports the notion that the human medial temporal lobe is involved in encoding novel visual information into memory.

  20. The Orientation of Visual Space from the Perspective of Hummingbirds

    Directory of Open Access Journals (Sweden)

    Luke P. Tyrrell

    2018-01-01

    Vision is a key component of hummingbird behavior. Hummingbirds hover in front of flowers, guide their bills into them for foraging, and maneuver backwards to undock from them. Capturing insects is also an important foraging strategy for most hummingbirds. However, little is known about the visual sensory specializations hummingbirds use to guide these two foraging strategies. We characterized the hummingbird visual field configuration, degree of eye movement, and orientation of the centers of acute vision. Hummingbirds had a relatively narrow binocular field (~30°) that extended above and behind their heads. Their blind area was also relatively narrow (~23°), which increased their visual coverage (about 98% of their celestial hemisphere). Additionally, eye movement amplitude was relatively low (~9°), so their ability to converge or diverge their eyes was limited. We confirmed that hummingbirds have two centers of acute vision: a fovea centralis, projecting laterally, and an area temporalis, projecting more frontally. This retinal configuration is similar to other predatory species, which may allow hummingbirds to enhance their success at preying on insects. However, there is no evidence that their temporal area could visualize the bill tip or that eye movements could compensate for this constraint. Therefore, guidance of precise bill position during the process of docking occurs via indirect cues or directly with low visual acuity despite having a temporal center of acute vision. The large visual coverage may favor the detection of predators and competitors even while docking into a flower. Overall, hummingbird visual configuration does not seem specialized for flower docking.

  1. The Orientation of Visual Space from the Perspective of Hummingbirds.

    Science.gov (United States)

    Tyrrell, Luke P; Goller, Benjamin; Moore, Bret A; Altshuler, Douglas L; Fernández-Juricic, Esteban

    2018-01-01

    Vision is a key component of hummingbird behavior. Hummingbirds hover in front of flowers, guide their bills into them for foraging, and maneuver backwards to undock from them. Capturing insects is also an important foraging strategy for most hummingbirds. However, little is known about the visual sensory specializations hummingbirds use to guide these two foraging strategies. We characterized the hummingbird visual field configuration, degree of eye movement, and orientation of the centers of acute vision. Hummingbirds had a relatively narrow binocular field (~30°) that extended above and behind their heads. Their blind area was also relatively narrow (~23°), which increased their visual coverage (about 98% of their celestial hemisphere). Additionally, eye movement amplitude was relatively low (~9°), so their ability to converge or diverge their eyes was limited. We confirmed that hummingbirds have two centers of acute vision: a fovea centralis , projecting laterally, and an area temporalis , projecting more frontally. This retinal configuration is similar to other predatory species, which may allow hummingbirds to enhance their success at preying on insects. However, there is no evidence that their temporal area could visualize the bill tip or that eye movements could compensate for this constraint. Therefore, guidance of precise bill position during the process of docking occurs via indirect cues or directly with low visual acuity despite having a temporal center of acute vision. The large visual coverage may favor the detection of predators and competitors even while docking into a flower. Overall, hummingbird visual configuration does not seem specialized for flower docking.

  2. Etomidate accurately localizes the epileptic area in patients with temporal lobe epilepsy.

    Science.gov (United States)

    Pastor, Jesús; Wix, Rybel; Meilán, María Luisa; Martínez-Chacón, José Luís; de Dios, Eva; Domínguez-Gadea, Luis; Herrera-Peco, Iván; Sola, Rafael G

    2010-04-01

    A variety of drugs have been used to activate and identify the epileptogenic area in patients during presurgical evaluation. We have evaluated the safety and usefulness of etomidate in identifying the epileptic zone by measuring bioelectrical brain activity and cerebral blood flow (CBF). We studied 13 men and 9 women under presurgical evaluation for temporal lobe epilepsy. We applied etomidate (0.1 mg/kg) while patients were monitored by video-electroencephalography (VEEG) with foramen ovale electrodes. In a subset of 15 patients, we also measured CBF with single photon emission computed tomography (SPECT). (1) Etomidate induced seizures in 2 of 22 patients. (2) The main side-effects observed were myoclonus (14 of 20) and moderate pain (3 of 20). (3) No changes in capillary oxygen saturation, respiration, or heart rate were observed. (4) Irritative activity specifically increased in the temporal mesial and lateral areas. No spikes were observed in other areas, aside from those observed under baseline conditions. (5) Irritative activity induced by etomidate correctly lateralized the ictal onset zone in 19 of 20 patients. In addition, the two etomidate-induced seizures appeared in the same regions as spontaneous ones. (6) The kinetics of pharmacologically induced activity was higher in the region of the ictal-onset zone. (7) Etomidate increased the CBF in the basal ganglia and especially in the posterior hippocampus of the temporal mesial region contralateral to the ictal-onset zone. Etomidate activation is a safe, specific, and quick test that can be used to identify the epileptic region in patients evaluated as candidates for temporal lobe epilepsy surgery.

  3. Locating Temporal Functional Dynamics of Visual Short-Term Memory Binding using Graph Modular Dirichlet Energy

    Science.gov (United States)

    Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre

    2017-02-01

    Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.
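
    The Modular Dirichlet Energy used here builds on the graph Dirichlet energy, x^T L x, i.e. the sum of w_ij (x_i - x_j)^2 over the edges of the functional connectivity graph, evaluated module by module. The sketch below illustrates only that underlying quantity on toy data; it is a loose reading rather than the authors' exact MDE formulation, and the connectivity matrix, node signal, and module assignment are synthetic examples.

```python
# Toy illustration of graph Dirichlet energy restricted to a module (not the
# authors' exact MDE definition). Connectivity, signal, and module are synthetic.
import numpy as np

def dirichlet_energy(W, x, module=None):
    """Sum of W[i, j] * (x[i] - x[j])**2 over edges with at least one endpoint
    in `module` (all edges if module is None); equals x^T L x for the full graph."""
    n = len(x)
    nodes = set(range(n)) if module is None else set(module)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if i in nodes or j in nodes:
                energy += W[i, j] * (x[i] - x[j]) ** 2
    return energy

rng = np.random.default_rng(0)
W = rng.random((8, 8))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                 # toy weighted connectivity matrix
x = rng.standard_normal(8)               # toy signal on the electrodes/nodes
occipital_module = [0, 1, 2]             # hypothetical module assignment

print("whole-graph energy:", dirichlet_energy(W, x))
print("occipital-module energy:", dirichlet_energy(W, x, occipital_module))
```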

  4. Visual and SPM Analysis of Brain Perfusion SPECT in Patients of Dementia with Lewy Bodies with Clinical Correlation

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Do Young; Park, Kyung Won; Kim, Jae Woo [College of Medicine, Univ. of Donga, Busan (Korea, Republic of)

    2003-07-01

    Dementia with Lewy bodies (DLB) is widely recognized as the second commonest form of degenerative dementia. Its core clinical features include persistent visual hallucinosis, fluctuating cognitive impairment and parkinsonism. We evaluated the brain perfusion of dementia with Lewy bodies by SPM analysis and correlated the findings with clinical symptom. Twelve DLB patients (mean age: 68.8±8.3 yrs, K-MMSE: 17.36) and 30 control subjects (mean age: 60.1±7.7 yrs) were included. Control subjects were selected by 28 items of exclusion criteria and checked by brain CT or MRI except 3 subjects. Tc-99m HMPAO brain perfusion SPECT was performed and the image data were analyzed by visual interpretation and SPM99 as routine protocol. In visual analysis, 7 patients showed hypoperfusion in both frontal, temporal, parietal and occipital lobe, 2 patients in both frontal, temporal and parietal lobe, 2 patients in both temporal, parietal and occipital lobe, 1 patient in left temporal, parietal and occipital lobe. In SPM analysis (uncorrected p<0.01), significant hypoperfusion was shown in Lt inf. frontal gyrus (B no.47), both inf. parietal lobule (Rt B no.40), Rt parietal lobe (precuneus), both sup. temporal gyrus (Rt B no.42), Rt mid temporal gyrus, Lt transverse temporal gyrus (B no.41), both para hippocampal gyrus, Rt thalamus (pulvinar), both cingulate gyrus (Lt B no.24, Lt B no.25, Rt B no.23, Rt B no.24, Rt B no.33), Rt caudate body, both occipital lobe (cuneus, Lt B no.17, Rt B no.18). All patients had fluctuating cognition and parkinsonism, and 9 patients had visual hallucination. The result of SPM analysis was well correlated with visual interpretation and may be helpful to specify location to correlate with clinical symptom. Significant perfusion deficits in occipital region including visual cortex and visual association area are characteristic findings in DLB. Abnormalities in these areas may be important in understanding symptoms of visual hallucination and

  5. Visual and SPM Analysis of Brain Perfusion SPECT in Patients of Dementia with Lewy Bodies with Clinical Correlation

    International Nuclear Information System (INIS)

    Kang, Do Young; Park, Kyung Won; Kim, Jae Woo

    2003-01-01

    Dementia with Lewy bodies (DLB) is widely recognized as the second commonest form of degenerative dementia. Its core clinical features include persistent visual hallucinosis, fluctuating cognitive impairment and parkinsonism. We evaluated the brain perfusion of dementia with Lewy bodies by SPM analysis and correlated the findings with clinical symptom. Twelve DLB patients (mean age: 68.8±8.3 yrs, K-MMSE: 17.36) and 30 control subjects (mean age: 60.1±7.7 yrs) were included. Control subjects were selected by 28 items of exclusion criteria and checked by brain CT or MRI except 3 subjects. Tc-99m HMPAO brain perfusion SPECT was performed and the image data were analyzed by visual interpretation and SPM99 as routine protocol. In visual analysis, 7 patients showed hypoperfusion in both frontal, temporal, parietal and occipital lobe, 2 patients in both frontal, temporal and parietal lobe, 2 patients in both temporal, parietal and occipital lobe, 1 patient in left temporal, parietal and occipital lobe. In SPM analysis (uncorrected p<0.01), significant hypoperfusion was shown in Lt inf. frontal gyrus (B no.47), both inf. parietal lobule (Rt B no.40), Rt parietal lobe (precuneus), both sup. temporal gyrus (Rt B no.42), Rt mid temporal gyrus, Lt transverse temporal gyrus (B no.41), both para hippocampal gyrus, Rt thalamus (pulvinar), both cingulate gyrus (Lt B no.24, Lt B no.25, Rt B no.23, Rt B no.24, Rt B no.33), Rt caudate body, both occipital lobe (cuneus, Lt B no.17, Rt B no.18). All patients had fluctuating cognition and parkinsonism, and 9 patients had visual hallucination. The result of SPM analysis was well correlated with visual interpretation and may be helpful to specify location to correlate with clinical symptom. Significant perfusion deficits in occipital region including visual cortex and visual association area are characteristic findings in DLB. Abnormalities in these areas may be important in understanding symptoms of visual hallucination and

  6. A Pencil Rescues Impaired Performance on a Visual Discrimination Task in Patients with Medial Temporal Lobe Lesions

    Science.gov (United States)

    Knutson, Ashley R.; Hopkins, Ramona O.; Squire, Larry R.

    2013-01-01

    We tested proposals that medial temporal lobe (MTL) structures support not just memory but certain kinds of visual perception as well. Patients with hippocampal lesions or larger MTL lesions attempted to identify the unique object among twin pairs of objects that had a high degree of feature overlap. Patients were markedly impaired under the more…

  7. Visual memory and visual mental imagery recruit common control and sensory regions of the brain.

    Science.gov (United States)

    Slotnick, Scott D; Thompson, William L; Kosslyn, Stephen M

    2012-01-01

    Separate lines of research have shown that visual memory and visual mental imagery are mediated by frontal-parietal control regions and can rely on occipital-temporal sensory regions of the brain. We used fMRI to assess the degree to which visual memory and visual mental imagery rely on the same neural substrates. During the familiarization/study phase, participants studied drawings of objects. During the test phase, words corresponding to old and new objects were presented. In the memory test, participants responded "remember," "know," or "new." In the imagery test, participants responded "high vividness," "moderate vividness," or "low vividness." Visual memory (old-remember) and visual imagery (old-high vividness) were commonly associated with activity in frontal-parietal control regions and occipital-temporal sensory regions. In addition, visual memory produced greater activity than visual imagery in parietal and occipital-temporal regions. The present results suggest that visual memory and visual imagery rely on highly similar--but not identical--cognitive processes.

  8. Alterations of the visual pathways in congenital blindness

    DEFF Research Database (Denmark)

    Ptito, Maurice; Schneider, Fabien C G; Paulson, Olaf B

    2008-01-01

    .../19 and the middle temporal cortex (MT) showing volume reductions of up to 20%. Additional significant white matter alterations were observed in the inferior longitudinal tract and in the posterior part of the corpus callosum, which links the visual areas of both hemispheres. Our data indicate that the afferent... projections to the visual cortex in CB are largely atrophied. Despite the massive volume reductions in the occipital lobes, there is compelling evidence from the literature (reviewed in Noppeney 2007; Ptito and Kupers 2005) that blind subjects activate their visual cortex when performing tasks that involve...

  9. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory

    OpenAIRE

    Masahiro Kawasaki; Keiichi Kitajo; Yoko Yamaguchi

    2014-01-01

    In humans, theta phase (4–8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from...

  10. What You See Is What You Remember: Visual Chunking by Temporal Integration Enhances Working Memory.

    Science.gov (United States)

    Akyürek, Elkan G; Kappelmann, Nils; Volkert, Marc; van Rijn, Hedderik

    2017-12-01

    Human memory benefits from information clustering, which can be accomplished by chunking. Chunking typically relies on expertise and strategy, and it is unknown whether perceptual clustering over time, through temporal integration, can also enhance working memory. The current study examined the attentional and working memory costs of temporal integration of successive target stimulus pairs embedded in rapid serial visual presentation. ERPs were measured as a function of behavioral reports: One target, two separate targets, or two targets reported as a single integrated target. N2pc amplitude, reflecting attentional processing, depended on the actual number of successive targets. The memory-related CDA and P3 components instead depended on the perceived number of targets irrespective of their actual succession. The report of two separate targets was associated with elevated amplitude, whereas integrated as well as actual single targets exhibited lower amplitude. Temporal integration thus provided an efficient means of processing sensory input, offloading working memory so that the features of two targets were consolidated and maintained at a cost similar to that of a single target.

  11. Activation of extrastriate and frontal cortical areas by visual words and word-like stimuli

    International Nuclear Information System (INIS)

    Petersen, S.E.; Fox, P.T.; Snyder, A.Z.; Raichle, M.E.

    1990-01-01

    Visual presentation of words activates extrastriate regions of the occipital lobes of the brain. When analyzed by positron emission tomography (PET), certain areas in the left, medial extrastriate visual cortex were activated by visually presented pseudowords that obey English spelling rules, as well as by actual words. These areas were not activated by nonsense strings of letters or letter-like forms. Thus visual word form computations are based on learned distinctions between words and nonwords. In addition, during passive presentation of words, but not pseudowords, activation occurred in a left frontal area that is related to semantic processing. These findings support distinctions made in cognitive psychology and computational modeling between high-level visual and semantic computations on single words and describe the anatomy that may underlie these distinctions

  12. Visual agnosia and prosopagnosia secondary to melanoma metastases: case report

    Science.gov (United States)

    Frota, Norberto Anízio Ferreira; Pinto, Lécio Figueira; Porto, Claudia Sellitto; de Aguia, Paulo Henrique Pires; Castro, Luiz Henrique Martins; Caramelli, Paulo

    2007-01-01

    The association of visual agnosia and prosopagnosia with cerebral metastasis is very rare. The presence of symmetric and bilateral cerebral metastases of melanoma is also uncommon. We report the case of a 34-year-old man who was admitted to hospital with seizures and a three-month history of headache, with blurred vision during the past month. A previous history of melanoma resection was obtained. CT of the skull showed bilateral heterogeneous hypodense lesions in the occipito-temporal regions, with a ring pattern of contrast enhancement. Surgical resection of both metastatic lesions was performed, after which the patient developed visual agnosia and prosopagnosia. On follow-up, he showed partial recovery of visual agnosia, while prosopagnosia was still evident. The relevance of this case is the rare presentation of metastatic malignant melanoma affecting homologous occipito-temporal areas associated with prosopagnosia and associative visual agnosia. PMID:29213375

  13. Temporal dynamics of visual working memory.

    Science.gov (United States)

    Sobczak-Edmans, M; Ng, T H B; Chan, Y C; Chew, E; Chuang, K H; Chen, S H A

    2016-01-01

    The involvement of the human cerebellum in working memory has been well established in the last decade. However, the cerebro-cerebellar network for visual working memory is not as well defined. Our previous fMRI study showed superior and inferior cerebellar activations during a block design visual working memory task, but specific cerebellar contributions to cognitive processes in encoding, maintenance and retrieval have not yet been established. The current study examined cerebellar contributions to each of the components of visual working memory and presence of cerebellar hemispheric laterality was investigated. 40 young adults performed a Sternberg visual working memory task during fMRI scanning using a parametric paradigm. The contrast between high and low memory load during each phase was examined. We found that the most prominent activation was observed in vermal lobule VIIIb and bilateral lobule VI during encoding. Using a quantitative laterality index, we found that left-lateralized activation of lobule VIIIa was present in the encoding phase. In the maintenance phase, there was bilateral lobule VI and right-lateralized lobule VIIb activity. Changes in activation in right lobule VIIIa were present during the retrieval phase. The current results provide evidence that superior and inferior cerebellum contributes to visual working memory, with a tendency for left-lateralized activations in the inferior cerebellum during encoding and right-lateralized lobule VIIb activations during maintenance. The results of the study are in agreement with Baddeley's multi-component working memory model, but also suggest that stored visual representations are additionally supported by maintenance mechanisms that may employ verbal coding. Copyright © 2015 Elsevier Inc. All rights reserved.
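
    The hemispheric laterality reported here is typically quantified with an index of the form LI = (L - R) / (L + R), where L and R summarize activation (for example, suprathreshold voxel counts or mean beta values) in homologous left and right regions, and positive values indicate left-lateralization. The study's exact formula is not reproduced in this abstract, so the snippet below is only a conventional illustration with hypothetical numbers.

```python
# Conventional fMRI laterality-index sketch (the study's exact formula is assumed).
def laterality_index(left: float, right: float) -> float:
    """LI in [-1, 1]; positive values indicate left-lateralized activation."""
    return (left - right) / (left + right)

# Hypothetical suprathreshold voxel counts for left vs. right lobule VIIIa:
print(laterality_index(120, 80))   # 0.2 -> modestly left-lateralized
```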

  14. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    Science.gov (United States)

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We

  16. Imaging cortical activity following affective stimulation with a high temporal and spatial resolution

    Directory of Open Access Journals (Sweden)

    Catani Claudia

    2009-07-01

    Full Text Available Abstract Background The affective and motivational relevance of a stimulus has a distinct impact on cortical processing, particularly in sensory areas. However, the spatial and temporal dynamics of this affective modulation of brain activities remain unclear. The purpose of the present study was the development of a paradigm to investigate the affective modulation of cortical networks with a high temporal and spatial resolution. We assessed cortical activity with MEG using a visual steady-state paradigm with affective pictures. A combination of a complex demodulation procedure with a minimum norm estimation was applied to assess the temporal variation of the topography of cortical activity. Results Statistical permutation analyses of the results of the complex demodulation procedure revealed increased steady-state visual evoked field amplitudes over occipital areas following presentation of affective pictures compared to neutral pictures. This differentiation shifted in the time course from occipital regions to parietal and temporal regions. Conclusion It can be shown that stimulation with affective pictures leads to enhanced activity in occipital regions as compared to neutral pictures. However, the focus of differentiation is not stable over time but shifts into temporal and parietal regions within four seconds of stimulation. Thus, it can be crucial to carefully choose regions of interest and time intervals when analyzing the affective modulation of cortical activity.
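
    The record tracks the steady-state visual evoked field amplitude over time by combining a complex demodulation procedure with a minimum norm estimation. A minimal sketch of the demodulation step alone, assuming a known stimulation frequency f0 and sampling rate fs: multiply the signal by a complex exponential at f0 to shift that component to 0 Hz, low-pass filter, and take the magnitude. The source estimation step is not shown, and the parameters are illustrative.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def complex_demodulate(signal, fs, f0, bw=2.0):
          """Time-varying amplitude of `signal` at frequency f0 (Hz).
          Multiplying by exp(-i*2*pi*f0*t) shifts the f0 component to 0 Hz;
          a low-pass filter with cutoff `bw` then isolates its envelope."""
          t = np.arange(signal.size) / fs
          shifted = signal * np.exp(-2j * np.pi * f0 * t)
          b, a = butter(4, bw / (fs / 2.0), btype="low")
          baseband = filtfilt(b, a, shifted.real) + 1j * filtfilt(b, a, shifted.imag)
          return 2.0 * np.abs(baseband)   # factor 2 recovers the original amplitude

      # Toy example: a 12 Hz steady-state response whose amplitude grows over time
      fs, f0 = 600.0, 12.0
      t = np.arange(0, 4.0, 1.0 / fs)
      meg = (1.0 + 0.5 * t) * np.sin(2 * np.pi * f0 * t) + 0.3 * np.random.randn(t.size)
      envelope = complex_demodulate(meg, fs, f0)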

  17. Body-Selective Areas in the Visual Cortex are less active in Children than in Adults

    Directory of Open Access Journals (Sweden)

    Paddy D Ross

    2014-11-01

    Full Text Available Our ability to read other people’s non-verbal signals gets refined throughout childhood and adolescence. How this is paralleled by brain development has been investigated mainly with regards to face perception, showing a protracted functional development of the face-selective visual cortical areas. In view of the importance of whole-body expressions in interpersonal communication it is important to understand the development of brain areas sensitive to these social signals. Here we used functional magnetic resonance imaging (fMRI) to compare brain activity in a group of 24 children (age 6-11) and 26 adults while they passively watched short videos of body or object movements. We observed activity in similar regions in both groups; namely the extra-striate body area (EBA), fusiform body area (FBA), posterior superior temporal sulcus (pSTS), amygdala and premotor regions. Adults showed additional activity in the inferior frontal gyrus. Within the main body-selective regions (EBA, FBA and pSTS), the strength and spatial extent of fMRI signal change was larger in adults than in children. Multivariate Bayesian analysis showed that the spatial pattern of neural representation within those regions did not change over age. Our results indicate, for the first time, that body perception, like face perception, is still maturing through the second decade of life.

  18. Middle Temporal Gyrus Versus Inferior Temporal Gyrus Transcortical Approaches to High-Grade Astrocytomas in the Mediobasal Temporal Lobe: A Comparison of Outcomes, Functional Restoration, and Surgical Considerations.

    Science.gov (United States)

    Quinones-Hinojosa, Alfredo; Raza, Shaan M; Ahmed, Ishrat; Rincon-Torroella, Jordina; Chaichana, Kaisorn; Olivi, Alessandro

    2017-01-01

    High-grade astrocytomas of the mesial temporal lobe may pose surgical challenges. Several approaches (trans-sylvian, subtemporal, and transcortical) have been designed to circumnavigate the critical neurovascular structures and white fiber tracts that surround this area. Considering the paucity of literature on the transcortical approach for these lesions, we describe our institutional experience with transcortical approaches to Grade III/IV astrocytomas in the mesial temporal lobe. Between 1999 and 2009, 23 patients underwent surgery at the Johns Hopkins Medical Institutions for Grade III/IV astrocytomas involving the mesial temporal lobe (without involvement of the temporal neocortex). Clinical notes, operative records, and imaging were reviewed. Thirteen patients had tumors in the dominant hemisphere. All patients underwent surgery via a transcortical approach (14 via the inferior temporal gyrus and 9 via the middle temporal gyrus). Gross total resection was obtained in 92% of the cohort. Neurological outcomes were: clinically significant stroke (2 patients), new visual deficits (2 patients), new speech deficit (1 patient); seizure control (53%). In comparison to reported results in the literature for the trans-sylvian and subtemporal approaches, the transcortical approach may provide the access necessary for a gross total resection with minimal neurological consequences. In our series of patients, there was no statistically significant difference in outcomes between the middle temporal gyrus and the inferior temporal gyrus trajectories.

  19. Dynamic adjustments in frontal, hippocampal, and inferior temporal interactions with increasing visual working memory load

    OpenAIRE

    Rissman, Jesse; Gazzaley, Adam; D’Esposito, Mark

    2007-01-01

    The active maintenance of visual stimuli across a delay interval in working memory tasks is thought to involve reverberant neural communication between the prefrontal cortex and posterior visual association areas. The hippocampus has also recently been attributed a role in this retention process, presumably via its reciprocal connectivity with visual regions. To characterize the nature of these inter-regional interactions, we applied a recently developed functional connectivity analysis metho...

  20. Endoscopic facelift of the frontal and temporal areas in multiple planes.

    Science.gov (United States)

    Hu, Xiaogen; Ma, Haihuan; Xue, Zhiqiang; Qi, Huijie; Chen, Bo

    2017-02-01

    The detachment planes used in endoscopic facelifts play an important role in determining the results of facial rejuvenation. In this study, we introduced the use of multiple detachment planes for endoscopic facelifts of the frontal and temporal areas, and examined its outcome. This study included 47 patients (38 female, 9 male) who requested frontal and temporal facelifts from January 2009 to January 2014. The technique of dissection in multiple planes was used for all 47 patients. In this technique, the frontal dissection was first carried out in the subgaleal plane, before being changed to the subperiosteal plane about 2 cm above the eyebrow line. Temporal dissection was carried out in both the subcutaneous and subgaleal planes. After detachment, frontal and temporal fixations were achieved using nonabsorbable sutures, and the incisions were closed. During follow-up (ranging from 6-24 months after surgery), the patients were shown their pre- and postoperative images, and asked to rate their satisfaction with the procedure. Complications encountered were documented. All 47 patients had complete recovery without any serious complications. The patient satisfaction rate was 93.6%. Minor complications included dimpling at the suture site, asymmetry, overcorrection, transitory paralysis, late oedema, haematoma, infection, scarring and hair loss. These complications resolved spontaneously and were negligible after complete recovery. Dissection in multiple planes is valuable in frontal and temporal endoscopic facelifts. It may be worthwhile to introduce the use of this technique in frontal and temporal facelifts, as it may lead to improved outcomes. Copyright: © Singapore Medical Association

  1. Temporal Reference, Attentional Modulation, and Crossmodal Assimilation

    Directory of Open Access Journals (Sweden)

    Yingqi Wan

    2018-06-01

    Full Text Available The crossmodal assimilation effect refers to the prominent phenomenon by which the ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception (such as the visual interval) of the subsequent task-relevant target events in another sensory modality. In the current experiments, using a visual Ternus display, we examined the roles of temporal reference, materialized as the time information accumulated before the onset of the target event, as well as attentional modulation, in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean auditory inter-intervals and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence contribute to biasing the percept of visual motion. A longer mean (or last) interval elicited more reports of group motion, whereas a shorter mean (or last) auditory interval gave rise to a more dominant percept of element motion. Importantly, observers have shown dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated by a long gap interval from the preceding sound sequence, the assimilation effect by the ensemble mean was reduced. Our findings suggested that crossmodal assimilation relies on a suitable temporal reference at the adaptation level, and revealed a general temporal perceptual grouping principle underlying complex audio-visual interactions in everyday dynamic situations.

  2. Endoscopic extradural supraorbital approach to the temporal pole and adjacent area: technical note.

    Science.gov (United States)

    Komatsu, Fuminari; Imai, Masaaki; Shigematsu, Hideaki; Aoki, Rie; Oda, Shinri; Shimoda, Masami; Matsumae, Mitsunori

    2017-08-25

    The authors' initial experience with the endoscopic extradural supraorbital approach to the temporal pole and adjacent area is reported. Fully endoscopic surgery using the extradural space via a supraorbital keyhole was performed for tumors in or around the temporal pole, including temporal pole cavernous angioma, sphenoid ridge meningioma, and cavernous sinus pituitary adenoma, mainly using 4-mm, 0° and 30° endoscopes and single-shaft instruments. After making a supraorbital keyhole, a 4-mm, 30° endoscope was advanced into the extradural space of the anterior cranial fossa during lifting of the dura mater. Following identification of the sphenoid ridge, orbital roof, and anterior clinoid process, the bone lateral to the orbital roof was drilled off until the dura mater of the anterior aspect of the temporal lobe was exposed. The dura mater of the temporal lobe was incised and opened, exposing the temporal pole under a 4-mm, 0° endoscope. Tumors in or around the temporal pole were safely removed under a superb view through the extradural corridor. The endoscopic extradural supraorbital approach was technically feasible and safe. The anterior trajectory to the temporal pole using the extradural space under endoscopy provided excellent visibility, allowing minimally invasive surgery. Further surgical experience and development of specialized instruments would promote this approach as an alternative surgical option.

  3. Neural correlates of visualizations of concrete and abstract words in preschool children: A developmental embodied approach

    Directory of Open Access Journals (Sweden)

    Amedeo eD'angiulli

    2015-06-01

    Full Text Available The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors) (part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-Related Potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., < 300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300-699 ms) and late (i.e., 700-1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a postero-anterior pathway sequence: occipital, parietal and temporal areas; conversely, matching visualization involved left-hemispheric activity following an antero-posterior pathway sequence: frontal, temporal, parietal and occipital areas. These results suggest that, similarly for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying

  4. Neural mechanisms underlying temporal modulation of visual perception

    NARCIS (Netherlands)

    Jong, M.C. de

    2015-01-01

    However confident we feel about the way we perceive the visual world around us, there is not a one-to-one relation between visual stimulation and visual perception. Our eyes register reflections of the visual environment and our brain has the difficult task of constructing ‘reality’ from this

  5. The medial temporal lobe supports sensing-based visual working memory.

    Science.gov (United States)

    Goodrich, Robin I; Yonelinas, Andrew P

    2016-08-01

    It is well established that the medial temporal lobe (MTL), including the hippocampus, is essential for long-term memory. In addition, recent studies suggest that the MTL may also support visual working memory (VWM), but the conditions under which the MTL plays a critical role are not yet clear. To address this issue, we used a color change detection paradigm to examine the effects of MTL damage on VWM by analyzing the receiver operating characteristics of patients with MTL damage and healthy age- and education-matched controls. Compared to controls, patients with MTL damage demonstrated significant reductions in VWM accuracy. Importantly, the patients were not impaired at making accurate high-confidence judgments that a change had occurred; however, they were impaired when making low-confidence responses indicating that they sensed whether or not there had been a visual change. Moreover, these impairments were observed under conditions that emphasized the retrieval of complex bindings or the retrieval of high-resolution bindings. That is, patients with MTL damage exhibited VWM impairments when they were required to remember either a larger number of low-resolution bindings (i.e., set size of 5 and obvious color changes) or a smaller number of high-resolution bindings (i.e., set size of 3 and subtle color changes). The results indicate that only some VWM processes are dependent on the MTL, and are consistent with the proposal that the MTL plays a critical role in forming complex, high-resolution bindings. Copyright © 2016 Elsevier Ltd. All rights reserved.
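
    The record analyzes receiver operating characteristics built from confidence ratings in a change detection task. A minimal sketch of how such a confidence-based ROC can be constructed from hypothetical rating data: hit and false-alarm rates are cumulated from the strictest "change" criterion downwards. The rating scale and example values are illustrative, not the study's data.

      import numpy as np

      def confidence_roc(change_ratings, no_change_ratings, n_levels=6):
          """ROC points from confidence ratings (1 = sure 'no change',
          n_levels = sure 'change'). Ratings on change trials give hit rates,
          ratings on no-change trials give false-alarm rates, both cumulated
          from the strictest criterion downwards."""
          change = np.asarray(change_ratings)
          no_change = np.asarray(no_change_ratings)
          hits, fas = [], []
          for criterion in range(n_levels, 0, -1):
              hits.append(np.mean(change >= criterion))
              fas.append(np.mean(no_change >= criterion))
          return np.array(fas), np.array(hits)

      # Hypothetical ratings (1-6) from one participant
      fas, hits = confidence_roc([6, 5, 6, 4, 2, 5, 3], [1, 2, 1, 3, 5, 2, 1])
      auc = np.sum(np.diff(fas) * (hits[1:] + hits[:-1]) / 2.0)   # trapezoidal area under the ROC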

  6. Visual abilities in two raptors with different ecology.

    Science.gov (United States)

    Potier, Simon; Bonadonna, Francesco; Kelber, Almut; Martin, Graham R; Isard, Pierre-François; Dulaurent, Thomas; Duriez, Olivier

    2016-09-01

    Differences in visual capabilities are known to reflect differences in foraging behaviour even among closely related species. Among birds, the foraging of diurnal raptors is assumed to be guided mainly by vision but their foraging tactics include both scavenging upon immobile prey and the aerial pursuit of highly mobile prey. We studied how visual capabilities differ between two diurnal raptor species of similar size: Harris's hawks, Parabuteo unicinctus, which take mobile prey, and black kites, Milvus migrans, which are primarily carrion eaters. We measured visual acuity, foveal characteristics and visual fields in both species. Visual acuity was determined using a behavioural training technique; foveal characteristics were determined using ultra-high resolution spectral-domain optical coherence tomography (OCT); and visual field parameters were determined using an ophthalmoscopic reflex technique. We found that these two raptors differ in their visual capacities. Harris's hawks have a visual acuity slightly higher than that of black kites. Among the five Harris's hawks tested, individuals with higher estimated visual acuity made more horizontal head movements before making a decision. This may reflect an increase in the use of monocular vision. Harris's hawks have two foveas (one central and one temporal), while black kites have only one central fovea and a temporal area. Black kites have a wider visual field than Harris's hawks. This may facilitate the detection of conspecifics when they are scavenging. These differences in the visual capabilities of these two raptors may reflect differences in the perceptual demands of their foraging behaviours. © 2016. Published by The Company of Biologists Ltd.

  7. Patterns of morphological integration between parietal and temporal areas in the human skull.

    Science.gov (United States)

    Bruner, Emiliano; Pereira-Pedro, Ana Sofia; Bastir, Markus

    2017-10-01

    Modern humans have evolved bulging parietal areas and large, projecting temporal lobes. Both changes, largely due to a longitudinal expansion of these cranial and cerebral elements, were hypothesized to be the result of brain evolution and cognitive variations. Nonetheless, the independence of these two morphological characters has not been evaluated. Because of structural and functional integration among cranial elements, changes in the position of the temporal poles can be a secondary consequence of parietal bulging and reorientation of the head axis. In this study, we use geometric morphometrics to test the correlation between parietal shape and the morphology of the endocranial base in a sample of adult modern humans. Our results suggest that parietal proportions show no correlation with the relative position of the temporal poles within the spatial organization of the endocranial base. The vault and endocranial base are likely to be involved in distinct morphogenetic processes, with scarce or no integration between these two districts. Therefore, the current evidence rejects the hypothesis of reciprocal morphological influences between parietal and temporal morphology, suggesting that evolutionary spatial changes in these two areas may have been independent. However, parietal bulging exerts a visible effect on the rotation of the cranial base, influencing head position and orientation. This change can have had a major relevance in the reorganization of the head functional axis. © 2017 Wiley Periodicals, Inc.

  8. Effects of gabapentin on experimental somatic pain and temporal summation

    DEFF Research Database (Denmark)

    Arendt-Nielsen, Lars; Frøkjaer, Jens Brøndum; Staahl, Camilla

    2007-01-01

    ......, was to examine the effect of a single dose of 1200 mg gabapentin on multi-modal experimental cutaneous and muscle pain models. METHODS: The following pain models were applied: (1) pain thresholds to single and repeated cutaneous and intramuscular electrical stimulation (temporal summation to 5 stimuli delivered at 2 Hz); (2) stimulus-response function relating pain intensity scores (visual analog scale, VAS) to increasing current intensities for electrical skin and muscle stimuli (single and repeated, determined at baseline); and (3) the pain intensity (VAS) and pain areas after intramuscular injection...... reduced the area under the pain intensity curve to hypertonic saline injections in the muscle (P = .02); and (3) significantly reduced the area of pain evoked by hypertonic saline (P = .03). CONCLUSIONS: Gabapentin reduces temporal summation of skin stimuli at pain threshold intensities; this may have......

  9. Temporal Oculomotor Inhibition of Return and Spatial Facilitation of Return in a Visual Encoding Task

    Directory of Open Access Journals (Sweden)

    Steven G Luke

    2013-07-01

    Full Text Available Oculomotor inhibition of return (O-IOR) is an increase in saccade latency prior to an eye movement to a recently fixated location compared to other locations. It has been proposed that this temporal O-IOR may have spatial consequences, facilitating foraging by inhibiting return to previously attended regions. In order to test this possibility, participants viewed arrays of objects and of words while their eye movements were recorded. Temporal O-IOR was observed, with equivalent effects for object and word arrays, indicating that temporal O-IOR is an oculomotor phenomenon independent of array content. There was no evidence for spatial inhibition of return. Instead, spatial facilitation of return was observed: Participants were significantly more likely than chance to make return saccades and to refixate just-visited locations. Further, the likelihood of making a return saccade to an object or word was contingent on the amount of time spent viewing that object or word before leaving it. This suggests that, unlike temporal O-IOR, return probability is influenced by cognitive processing. Taken together, these results are inconsistent with the hypothesis that inhibition of return functions as a foraging facilitator. The results also provide strong evidence for a different oculomotor bias that could serve as a foraging facilitator: saccadic momentum, a tendency to repeat the most recently executed saccade program. We suggest that models of visual attention could incorporate saccadic momentum in place of inhibition of return.

  10. Validation of exposure visualization and audible distance emission for navigated temporal bone drilling in phantoms.

    Directory of Open Access Journals (Sweden)

    Eduard H J Voormolen

    Full Text Available BACKGROUND: A neuronavigation interface with extended function as compared with current systems was developed to aid during temporal bone surgery. The interface, named EVADE, updates the prior anatomical image and visualizes the bone drilling process virtually in real-time without need for intra-operative imaging. Furthermore, EVADE continuously calculates the distance from the drill tip to segmented temporal bone critical structures (e.g., the sigmoid sinus and facial nerve) and produces audiovisual warnings if the surgeon drills in too close vicinity. The aim of this study was to evaluate the accuracy and surgical utility of EVADE in physical phantoms. METHODOLOGY/PRINCIPAL FINDINGS: We performed 228 measurements assessing the position accuracy of tracking a navigated drill in the operating theatre. A mean target registration error of 1.33±0.61 mm with a maximum error of 3.04 mm was found. Five neurosurgeons each drilled two temporal bone phantoms, once using EVADE, and once using a standard neuronavigation interface. While using standard neuronavigation the surgeons damaged three modeled temporal bone critical structures. No structure was hit by surgeons utilizing EVADE. Surgeons felt better orientated and thought they had improved tumor exposure with EVADE. Furthermore, we compared the distances between surface meshes of the virtual drill cavities created by EVADE to actual drill cavities: average maximum errors of 2.54±0.49 mm and -2.70±0.48 mm were found. CONCLUSIONS/SIGNIFICANCE: These results demonstrate that EVADE gives accurate feedback which reduces risks of harming modeled critical structures compared to a standard neuronavigation interface during temporal bone phantom drilling.
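
    EVADE continuously computes the distance from the tracked drill tip to segmented critical structures and raises audiovisual warnings when the margin becomes too small. The sketch below illustrates that kind of distance check against a structure stored as surface vertices; the function names and the 3 mm / 1.5 mm thresholds are illustrative assumptions, not the EVADE implementation.

      import numpy as np

      def closest_distance(drill_tip, structure_vertices):
          """Euclidean distance (mm) from the drill tip to the nearest vertex of a
          segmented critical structure (e.g., a sigmoid sinus surface mesh)."""
          diffs = structure_vertices - drill_tip            # (N, 3) array of offsets
          return float(np.sqrt(np.min(np.sum(diffs ** 2, axis=1))))

      def proximity_warning(distance_mm, warn_at=3.0, stop_at=1.5):
          """Map a distance to a warning level, mimicking graded audiovisual alarms."""
          if distance_mm <= stop_at:
              return "STOP: critical structure imminent"
          if distance_mm <= warn_at:
              return "WARNING: approaching critical structure"
          return "safe"

      # Illustrative call with a tracked tip position and a dummy surface point cloud
      tip = np.array([10.2, -4.5, 33.0])
      surface = np.random.rand(500, 3) * 40.0
      print(proximity_warning(closest_distance(tip, surface)))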

  11. The Temporal Dynamics of Visual Search: Evidence for Parallel Processing in Feature and Conjunction Searches

    Science.gov (United States)

    McElree, Brian; Carrasco, Marisa

    2012-01-01

    Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
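
    The record fits models to response-signal SAT data to separate asymptotic accuracy from processing dynamics. SAT time courses are conventionally summarized by a shifted exponential approach to an asymptote; the sketch below shows that standard form with illustrative parameter values, not necessarily the exact model fitted in the paper.

      import numpy as np

      def sat_curve(t, asymptote, rate, intercept):
          """Shifted exponential commonly used for SAT data:
          d'(t) = asymptote * (1 - exp(-rate * (t - intercept))) for t > intercept,
          0 otherwise. `asymptote` indexes discrimination; `rate` and `intercept`
          index processing dynamics (speed)."""
          t = np.asarray(t, dtype=float)
          return np.where(t > intercept,
                          asymptote * (1.0 - np.exp(-rate * (t - intercept))),
                          0.0)

      # Illustrative curves: similar asymptotes but slower dynamics for one condition
      times = np.linspace(0.1, 2.0, 50)          # processing time in seconds
      fast = sat_curve(times, asymptote=3.0, rate=8.0, intercept=0.25)
      slow = sat_curve(times, asymptote=3.0, rate=4.0, intercept=0.30)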

  12. Visual agnosia and prosopagnosia secondary to melanoma metastases: Case report

    Directory of Open Access Journals (Sweden)

    Norberto Anízio Ferreira Frota

    Full Text Available Abstract The association of visual agnosia and prosopagnosia with cerebral metastasis is very rare. The presence of symmetric and bilateral cerebral metastases of melanoma is also uncommon. We report the case of a 34-year-old man who was admitted to hospital with seizures and a three-month history of headache, with blurred vision during the past month. A previous history of melanoma resection was obtained. CT of the skull showed bilateral heterogeneous hypodense lesions in the occipito-temporal regions, with a ring pattern of contrast enhancement. Surgical resection of both metastatic lesions was performed, after which the patient developed visual agnosia and prosopagnosia. On follow-up, he showed partial recovery of visual agnosia, while prosopagnosia was still evident. The relevance of this case is the rare presentation of metastatic malignant melanoma affecting homologous occipito-temporal areas associated with prosopagnosia and associative visual agnosia.

  13. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    Science.gov (United States)

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): A visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. How Fast Do Objects Fall in Visual Memory? Uncovering the Temporal and Spatial Features of Representational Gravity.

    Science.gov (United States)

    De Sá Teixeira, Nuno

    2016-01-01

    Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object's offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth's gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects' location.
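
    The abstract quantifies the drift as roughly 3 pixels of downward displacement per two-fold increase of the time until response, i.e., a logarithmic dependence on time. A small illustrative calculation of that reading; the reference time and the example values are hypothetical.

      import math

      def downward_drift(t_response, t_reference, pixels_per_doubling=3.0):
          """Displacement implied by '3 pixels per two-fold increase of time':
          drift = pixels_per_doubling * log2(t_response / t_reference)."""
          return pixels_per_doubling * math.log2(t_response / t_reference)

      print(downward_drift(1.2, 0.3))   # two doublings -> about 6 pixels downward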

  15. Learning to associate orientation with color in early visual areas by associative decoded fMRI neurofeedback

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-01-01

    Summary Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded functional magnetic resonance imaging (fMRI) neurofeedback, termed “DecNef” [9], we tested whether associative learning of color and orientation can be created in early visual areas. During three days' training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive “red” significantly more frequently than “green” in an achromatic vertical grating. This effect was also observed 3 to 5 months after the training. These results suggest that long-term associative learning of the two different visual features such as color and orientation was created most likely in early visual areas. This newly extended technique that induces associative learning is called “A(ssociative)-DecNef” and may be used as an important tool for understanding and modifying brain functions, since associations are fundamental and ubiquitous functions in the brain. PMID:27374335

  16. Learning to Associate Orientation with Color in Early Visual Areas by Associative Decoded fMRI Neurofeedback.

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-07-25

    Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded fMRI neurofeedback termed "DecNef" [9], we tested whether associative learning of orientation and color can be created in early visual areas. During 3 days of training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3-5 months after the training. These results suggest that long-term associative learning of two different visual features such as orientation and color was created, most likely in early visual areas. This newly extended technique that induces associative learning is called "A-DecNef," and it may be used as an important tool for understanding and modifying brain functions because associations are fundamental and ubiquitous functions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Visual exploration of big spatio-temporal urban data: a study of New York City taxi trips.

    Science.gov (United States)

    Ferreira, Nivan; Poco, Jorge; Vo, Huy T; Freire, Juliana; Silva, Cláudio T

    2013-12-01

    As increasing volumes of urban data are captured and become available, new opportunities arise for data-driven analysis that can lead to improvements in the lives of citizens through evidence-based decision making and policies. In this paper, we focus on a particularly important urban data set: taxi trips. Taxis are valuable sensors, and information associated with taxi trips can provide unprecedented insight into many different aspects of city life, from economic activity and human behavior to mobility patterns. But analyzing these data presents many challenges. The data are complex, containing geographical and temporal components in addition to multiple variables associated with each trip. Consequently, it is hard to specify exploratory queries and to perform comparative analyses (e.g., compare different regions over time). This problem is compounded due to the size of the data: there are, on average, 500,000 taxi trips each day in NYC. We propose a new model that allows users to visually query taxi trips. Besides standard analytics queries, the model supports origin-destination queries that enable the study of mobility across the city. We show that this model is able to express a wide range of spatio-temporal queries, and it is also flexible in that not only can queries be composed but also different aggregations and visual representations can be applied, allowing users to explore and compare results. We have built a scalable system that implements this model, which supports interactive response times; makes use of an adaptive level-of-detail rendering strategy to generate clutter-free visualization for large results; and shows hidden details to the users in a summary through the use of overlay heat maps. We present a series of case studies motivated by traffic engineers and economists that show how our model and system enable domain experts to perform tasks that were previously unattainable for them.
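
    The query model described here supports origin-destination queries constrained in space and time, followed by aggregation. A minimal sketch of such a query over a list of trip records; the Trip fields, bounding boxes and aggregate below are hypothetical stand-ins for the paper's actual schema and visual query widgets.

      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class Trip:
          pickup_time: datetime
          pickup: tuple       # (lon, lat)
          dropoff: tuple      # (lon, lat)
          fare: float

      def in_box(point, box):
          """box = (lon_min, lat_min, lon_max, lat_max)."""
          lon, lat = point
          return box[0] <= lon <= box[2] and box[1] <= lat <= box[3]

      def od_query(trips, origin_box, dest_box, t_start, t_end):
          """Origin-destination query: trips leaving origin_box for dest_box
          within [t_start, t_end], returned with a simple aggregate."""
          selected = [t for t in trips
                      if in_box(t.pickup, origin_box)
                      and in_box(t.dropoff, dest_box)
                      and t_start <= t.pickup_time <= t_end]
          return len(selected), sum(t.fare for t in selected)

      # Hypothetical bounding boxes for two Manhattan neighbourhoods and one trip
      midtown = (-74.00, 40.74, -73.97, 40.77)
      downtown = (-74.02, 40.70, -73.99, 40.72)
      trips = [Trip(datetime(2011, 5, 1, 8, 30), (-73.98, 40.75), (-74.00, 40.71), 12.5)]
      print(od_query(trips, midtown, downtown, datetime(2011, 5, 1), datetime(2011, 5, 2)))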

  18. The temporal dynamics of visual working memory guidance of selective attention

    Directory of Open Access Journals (Sweden)

    Jinfeng eTan

    2014-09-01

    Full Text Available The biased competition model proposes that there is top-down directing of attention to a stimulus matching the contents of working memory (WM), even when the maintenance of a WM representation is detrimental to target-relevant performance. Despite many studies elucidating that spatial WM guidance can be present early in the visual processing system, whether visual WM guidance also influences perceptual selection remains poorly understood. Here, we investigated the electrophysiological correlates of early guidance of attention by WM in humans. Participants were required to perform a visual search task while concurrently maintaining object representations in their visual working memory. Behavioral results showed that response times (RTs) were longer when the distractor in the visual search task was held in WM. The earliest WM guidance effect was observed in the P1 component (90-130 ms), with match trials eliciting larger P1 amplitude than mismatch trials. A similar result was also found in the N1 component (160-200 ms). These P1 and N1 effects could not be attributed to bottom-up perceptual priming from the presentation of a memory cue, because there was no significant difference in the early ERP components when the cue was merely perceptually identified but not actively held in working memory. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the early WM guidance occurred in the occipital lobe and the N1-related activation occurred in the parietal gyrus. Time-frequency data suggested that alpha-band event-related spectral perturbation (ERSP) magnitudes increased under the match condition compared with the mismatch condition. In conclusion, the present study suggests that the reappearance of a stimulus held in WM enhanced activity in the occipital area. Subsequently, this initial capture of attention by WM could be inhibited by competing visual inputs through attention re-orientation, reflected in the alpha-band rhythm.

  19. Integrating sentiment analysis and term associations with geo-temporal visualizations on customer feedback streams

    Science.gov (United States)

    Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun

    2012-01-01

    Twitter currently receives over 190 million tweets (small text-based Web posts) a day, and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
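
    The "term associations" in this record identify attributes, verbs and adjectives that frequently occur together in feedback items. A minimal sketch of the underlying co-occurrence counting, using whole feedback items as the window; the tokenization and the sample texts are illustrative only and do not reproduce the paper's pipeline.

      from collections import Counter
      from itertools import combinations

      def term_associations(feedback_texts, min_count=2):
          """Count how often pairs of terms co-occur within the same feedback item."""
          pair_counts = Counter()
          for text in feedback_texts:
              terms = sorted(set(text.lower().split()))
              pair_counts.update(combinations(terms, 2))
          return {pair: n for pair, n in pair_counts.items() if n >= min_count}

      reviews = [
          "battery life short",
          "short battery life disappointing",
          "great screen great battery",
      ]
      print(term_associations(reviews))   # e.g. ('battery', 'life'): 2, ('battery', 'short'): 2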

  20. Visualization of NO2 emission sources using temporal and spatial pattern analysis in Asia

    Science.gov (United States)

    Schütt, A. M. N.; Kuhlmann, G.; Zhu, Y.; Lipkowitsch, I.; Wenig, M.

    2016-12-01

    Nitrogen dioxide (NO2) is an indicator of population density and level of development, but the contributions of the different emission sources to the overall concentrations remain mostly unknown. In order to allocate fractions of OMI NO2 to emission types, we investigate several temporal cycles and regional patterns. Our analysis is based on daily maps of tropospheric NO2 vertical column densities (VCDs) from the Ozone Monitoring Instrument (OMI). The data set is mapped to a high-resolution grid by a histopolation algorithm. This algorithm is based on a continuous parabolic spline, producing more realistic smooth distributions while reproducing the measured OMI values when integrating over ground pixel areas. In the resulting sequence of zoom-in maps, we analyze weekly and annual cycles for cities, countryside and highways in China, Japan and the Republic of Korea, look for patterns and trends, and compare the derived results to emission sources in Central Europe and North America. Because heating increases in winter compared to summer and there is more traffic during the week than on Sundays, we can dissociate traffic, heating and power plant contributions and visualize maps of the different sources. We will also look into the influence of emission control measures during big events such as the 2008 Olympic Games and the 2010 World Expo as a way to confirm our classification of NO2 emission sources.
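
    The record separates emission sources partly through weekly cycles (weekday traffic versus the Sunday minimum). A minimal, hypothetical sketch of a weekday-to-Sunday ratio computed from daily column densities for one grid cell; the sample values and the interpretation threshold are illustrative, not taken from the study.

      import numpy as np

      def weekly_cycle_ratio(daily_vcd, weekdays):
          """daily_vcd: NO2 vertical column densities per day; weekdays: 0=Mon ... 6=Sun.
          Returns mean(working days) / mean(Sundays), a crude traffic indicator."""
          daily_vcd = np.asarray(daily_vcd, dtype=float)
          weekdays = np.asarray(weekdays)
          workday = daily_vcd[weekdays <= 4].mean()
          sunday = daily_vcd[weekdays == 6].mean()
          return workday / sunday

      # Illustrative two weeks of data for one grid cell (arbitrary units)
      vcd = [8.1, 8.4, 8.0, 7.9, 8.3, 6.5, 5.2, 8.2, 8.5, 8.1, 8.0, 8.4, 6.7, 5.0]
      days = [0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6]
      print(weekly_cycle_ratio(vcd, days))   # ratios well above 1 suggest traffic dominance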

  1. High temporal resolution photography for observing riparian area use and grazing behavior

    Science.gov (United States)

    In 2014, a 2.4 hectare site within the Apache-Sitgreaves National Forest in northeastern Arizona, USA was selected to characterize temporal and spatial patterns of riparian area use. Three consecutive 30, 8, and 46 day time periods representing 1) unrestricted access, 2) prescribed cattle use, and 3...

  2. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    Science.gov (United States)

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent

  3. Wide-area, real-time monitoring and visualization system

    Science.gov (United States)

    Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.

    2013-03-19

    A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a database, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.

  4. Google-Earth Based Visualizations for Environmental Flows and Pollutant Dispersion in Urban Areas

    Directory of Open Access Journals (Sweden)

    Daoming Liu

    2017-03-01

    Full Text Available In the present study, we address the development and application of an efficient tool for conversion of results obtained by an integrated computational fluid dynamics (CFD) and computational reaction dynamics (CRD) approach and their visualization in Google Earth. We focus on results typical for environmental fluid mechanics studies at a city scale that include characteristic wind flow patterns and dispersion of reactive scalars. This is achieved by developing a code based on the Java language, which converts the typical four-dimensional structure (spatial and temporal dependency) of data results into the Keyhole Markup Language (KML) format. The visualization techniques most often used are revisited and implemented into the conversion tool. The potential of the tool is demonstrated in a case study of smog formation due to an intense traffic emission in Rotterdam (The Netherlands). It is shown that Google Earth can provide a computationally efficient and user-friendly means of data representation. This feature can be very useful for visualization of pollution at street level, which is of great importance for city residents. Various meteorological and traffic emissions can be easily visualized and analyzed, providing a powerful, user-friendly tool for traffic regulations and urban climate adaptations.
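
    The record describes a Java tool that converts four-dimensional CFD/CRD results into KML for display in Google Earth. As a language-neutral illustration of the same idea (the study's converter is Java-based), the sketch below writes one time-stamped Placemark per sample so Google Earth's time slider can animate the field; the data layout, file name and sample coordinates are assumptions, and only the element names follow the public KML schema.

      def pollutant_to_kml(samples, filename="dispersion.kml"):
          """samples: iterable of (lon, lat, iso_time, value) tuples.
          Writes one time-stamped Placemark per sample so that Google Earth's
          time slider can animate the evolving concentration field."""
          header = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n')
          placemarks = []
          for lon, lat, iso_time, value in samples:
              placemarks.append(
                  "<Placemark>"
                  f"<name>c = {value:.1f}</name>"
                  f"<TimeStamp><when>{iso_time}</when></TimeStamp>"
                  f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
                  "</Placemark>\n")
          with open(filename, "w") as f:
              f.write(header + "".join(placemarks) + "</Document>\n</kml>\n")

      # Hypothetical street-level samples: (lon, lat, time, concentration)
      pollutant_to_kml([(4.48, 51.92, "2010-06-01T08:00:00Z", 42.0),
                        (4.49, 51.92, "2010-06-01T09:00:00Z", 55.5)])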

  5. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    Science.gov (United States)

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  6. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    Science.gov (United States)

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.

  7. Monitoring, analyzing and simulating of spatial-temporal changes of landscape pattern over mining area

    Science.gov (United States)

    Liu, Pei; Han, Ruimei; Wang, Shuangting

    2014-11-01

    Given the merits of remotely sensed data in depicting regional land cover and land change, multi-objective information processing was applied to remote sensing images to analyze and simulate land cover in mining areas. In this paper, multi-temporal remotely sensed data were selected to monitor the pattern, distribution and trend of LUCC and predict its impacts on the ecological environment and human settlement in a mining area. The monitoring, analysis and simulation of LUCC in this coal mining area are divided into five steps: information integration of optical and SAR data; LULC type extraction with an SVM classifier; LULC trend simulation with a CA-Markov model; landscape temporal change monitoring; and analysis with confusion matrices and landscape indices. The results demonstrate that the improved data fusion algorithm makes full use of information extracted from optical and SAR data; the SVM classifier is an efficient and stable way to obtain land cover maps, which provides a good basis for both land cover change analysis and trend simulation; and the CA-Markov model is able to predict LULC trends with good performance, making it an effective way to integrate remotely sensed data with a spatial-temporal model for the analysis of land use/cover change and the corresponding environmental impacts in mining areas. Evaluation with confusion matrices and landscape indices shows that there was a sustained downward trend in agricultural land and bare land, but a continuous growth trend in water bodies, forest and other land, while the building area showed a wave-like change, first increasing and then decreasing. The mining landscape has undergone a fragmentation process from small to large and back to small; agricultural land is the most strongly influenced landscape type in this area, and human activities are the primary cause, so the problem should receive more attention from government and other organizations.
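
    The CA-Markov simulation mentioned here combines a Markov projection of class areas with a cellular-automaton spatial allocation. The Markov part reduces to repeatedly multiplying a class-area vector by a transition probability matrix; the sketch below uses hypothetical classes and probabilities and omits the spatial allocation step entirely.

      import numpy as np

      # Hypothetical classes: agriculture, forest, water, building, bare land
      classes = ["agriculture", "forest", "water", "building", "bare"]
      area_t0 = np.array([40.0, 25.0, 5.0, 20.0, 10.0])     # km^2 at the first date

      # Transition probabilities P[i, j] = share of class i converting to class j
      P = np.array([[0.80, 0.05, 0.01, 0.10, 0.04],
                    [0.02, 0.93, 0.01, 0.03, 0.01],
                    [0.00, 0.01, 0.97, 0.01, 0.01],
                    [0.01, 0.01, 0.00, 0.97, 0.01],
                    [0.10, 0.05, 0.01, 0.14, 0.70]])

      def project(area, transition, steps=1):
          """Markov projection of class areas `steps` intervals into the future."""
          for _ in range(steps):
              area = area @ transition
          return area

      print(dict(zip(classes, np.round(project(area_t0, P, steps=2), 1))))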

  8. Phase-Amplitude Coupling and Long-Range Phase Synchronization Reveal Frontotemporal Interactions during Visual Working Memory.

    Science.gov (United States)

    Daume, Jonathan; Gruber, Thomas; Engel, Andreas K; Friese, Uwe

    2017-01-11

    It has been suggested that cross-frequency phase-amplitude coupling (PAC), particularly in temporal brain structures, serves as a neural mechanism for coordinated working memory storage. In this magnetoencephalography study, we show that during visual working memory maintenance, temporal cortex regions, which exhibit enhanced PAC, interact with prefrontal cortex via enhanced low-frequency phase synchronization. Healthy human participants were engaged in a visual delayed match-to-sample task with pictures of natural objects. During the delay period, we observed increased spectral power of beta (20-28 Hz) and gamma (40-94 Hz) bands as well as decreased power of theta/alpha band (7-9 Hz) oscillations in visual sensory areas. Enhanced PAC between the phases of theta/alpha and the amplitudes of beta oscillations was found in the left inferior temporal cortex (IT), an area known to be involved in visual object memory. Furthermore, the IT was functionally connected to the prefrontal cortex by increased low-frequency phase synchronization within the theta/alpha band. Together, these results point to a mechanism in which the combination of PAC and long-range phase synchronization subserves enhanced large-scale brain communication. They suggest that distant brain regions might coordinate their activity in the low-frequency range to engage local stimulus-related processing in higher frequencies via the combination of long-range, within-frequency phase synchronization and local cross-frequency PAC. Working memory maintenance, like other cognitive functions, requires the coordinated engagement of brain areas in local and large-scale networks. However, the mechanisms by which spatially distributed brain regions share and combine information remain primarily unknown. We show that the combination of long-range, low-frequency phase synchronization and local cross-frequency phase-amplitude coupling might serve as a mechanism to coordinate memory processes across distant brain areas
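
    The record quantifies phase-amplitude coupling between theta/alpha phase and beta amplitude. One widely used estimator is the mean-vector-length modulation index: band-pass filter into the two bands, take the Hilbert phase and amplitude, and measure the magnitude of the mean amplitude-weighted phase vector. The sketch below assumes that estimator and reuses the band limits quoted in the abstract; it is not necessarily the exact pipeline used in the study.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def bandpass(x, fs, lo, hi, order=4):
          b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          return filtfilt(b, a, x)

      def pac_mvl(x, fs, phase_band=(7, 9), amp_band=(20, 28)):
          """Mean-vector-length PAC: |mean(A_high * exp(i*phi_low))|."""
          phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
          amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
          return np.abs(np.mean(amp * np.exp(1j * phase)))

      # Toy signal: 24 Hz bursts whose amplitude follows the 8 Hz phase (true coupling)
      fs = 500.0
      t = np.arange(0, 10, 1 / fs)
      slow = np.sin(2 * np.pi * 8 * t)
      sig = slow + (1 + slow) * 0.5 * np.sin(2 * np.pi * 24 * t) + 0.1 * np.random.randn(t.size)
      print(pac_mvl(sig, fs))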

  9. Asymmetric Temporal Integration of Layer 4 and Layer 2/3 Inputs in Visual Cortex

    OpenAIRE

    Hang, Giao B.; Dan, Yang

    2010-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices...

  10. Photon hunting in the twilight zone: visual features of mesopelagic bioluminescent sharks.

    Directory of Open Access Journals (Sweden)

    Julien M Claes

    Full Text Available The mesopelagic zone is a visual scene continuum in which organisms have developed various strategies to optimize photon capture. Here, we used light microscopy, stereology-assisted retinal topographic mapping, spectrophotometry and microspectrophotometry to investigate the visual ecology of deep-sea bioluminescent sharks [four etmopterid species (Etmopterus lucifer, E. splendidus, E. spinax and Trigonognathus kabeyai) and one dalatiid species (Squaliolus aliae)]. We highlighted a novel structure, a translucent area present in the upper eye orbit of Etmopteridae, which might be part of a reference system for counterillumination adjustment or acts as a spectral filter for camouflage breaking, as well as several ocular specialisations such as aphakic gaps and semicircular tapeta previously unknown in elasmobranchs. All species showed pure rod hexagonal mosaics with a high topographic diversity. Retinal specialisations, formed by shallow cell density gradients, may aid in prey detection and reflect lifestyle differences; pelagic species display areae centrales while benthopelagic and benthic species display wide and narrow horizontal streaks, respectively. One species (E. lucifer) displays two areae within its horizontal streak that likely allows detection of conspecifics' elongated bioluminescent flank markings. Ganglion cell topography reveals less variation with all species showing a temporal area for acute frontal binocular vision. This area is dorsally extended in T. kabeyai, allowing this species to adjust the strike of its peculiar jaws in the ventro-frontal visual field. Etmopterus lucifer showed an additional nasal area matching a high rod density area. Peak spectral sensitivities of the rod visual pigments (λmax) fall within the range 484-491 nm, allowing these sharks to detect a high proportion of photons present in their habitat. Comparisons with previously published data reveal ocular differences between bioluminescent and non

  11. Photon hunting in the twilight zone: visual features of mesopelagic bioluminescent sharks.

    Science.gov (United States)

    Claes, Julien M; Partridge, Julian C; Hart, Nathan S; Garza-Gisholt, Eduardo; Ho, Hsuan-Ching; Mallefet, Jérôme; Collin, Shaun P

    2014-01-01

    The mesopelagic zone is a visual scene continuum in which organisms have developed various strategies to optimize photon capture. Here, we used light microscopy, stereology-assisted retinal topographic mapping, spectrophotometry and microspectrophotometry to investigate the visual ecology of deep-sea bioluminescent sharks [four etmopterid species (Etmopterus lucifer, E. splendidus, E. spinax and Trigonognathus kabeyai) and one dalatiid species (Squaliolus aliae)]. We highlighted a novel structure, a translucent area present in the upper eye orbit of Etmopteridae, which might be part of a reference system for counterillumination adjustment or acts as a spectral filter for camouflage breaking, as well as several ocular specialisations such as aphakic gaps and semicircular tapeta previously unknown in elasmobranchs. All species showed pure rod hexagonal mosaics with a high topographic diversity. Retinal specialisations, formed by shallow cell density gradients, may aid in prey detection and reflect lifestyle differences; pelagic species display areae centrales while benthopelagic and benthic species display wide and narrow horizontal streaks, respectively. One species (E. lucifer) displays two areae within its horizontal streak that likely allows detection of conspecifics' elongated bioluminescent flank markings. Ganglion cell topography reveals less variation with all species showing a temporal area for acute frontal binocular vision. This area is dorsally extended in T. kabeyai, allowing this species to adjust the strike of its peculiar jaws in the ventro-frontal visual field. Etmopterus lucifer showed an additional nasal area matching a high rod density area. Peak spectral sensitivities of the rod visual pigments (λmax) fall within the range 484-491 nm, allowing these sharks to detect a high proportion of photons present in their habitat. Comparisons with previously published data reveal ocular differences between bioluminescent and non-bioluminescent deep

  12. Functional size of human visual area V1: a neural correlate of top-down attention.

    Science.gov (United States)

    Verghese, Ashika; Kolbe, Scott C; Anderson, Andrew J; Egan, Gary F; Vidyasagar, Trichur R

    2014-06-01

    Heavy demands are placed on the brain's attentional capacity when selecting a target item in a cluttered visual scene, or when reading. It is widely accepted that such attentional selection is mediated by top-down signals from higher cortical areas to early visual areas such as the primary visual cortex (V1). Further, it has also been reported that there is considerable variation in the surface area of V1. This variation may impact on either the number or specificity of attentional feedback signals and, thereby, the efficiency of attentional mechanisms. In this study, we investigated whether individual differences between humans performing attention-demanding tasks can be related to the functional area of V1. We found that those with a larger representation in V1 of the central 12° of the visual field as measured using BOLD signals from fMRI were able to perform a serial search task at a faster rate. In line with recent suggestions of the vital role of visuo-spatial attention in reading, the speed of reading showed a strong positive correlation with the speed of visual search, although it showed little correlation with the size of V1. The results support the idea that the functional size of the primary visual cortex is an important determinant of the efficiency of selective spatial attention for simple tasks, and that the attentional processing required for complex tasks like reading are to a large extent determined by other brain areas and inter-areal connections. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs 8 and 16Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
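
    A simple way to express "response strength at the amplitude-modulation frequency" is the ratio between spectral power at that frequency and the mean power of neighbouring frequency bins. The paper's exact signal-to-noise definition is not reproduced here; the snippet below is a generic sketch on synthetic data, and the bin counts are arbitrary.

```python
import numpy as np

def snr_at_frequency(x, fs, f0, n_neighbors=10, exclude=1):
    """Power at f0 divided by the mean power in neighbouring FFT bins."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    k = np.argmin(np.abs(freqs - f0))
    neighbors = [k + d for d in range(-n_neighbors, n_neighbors + 1)
                 if abs(d) > exclude and 0 <= k + d < len(spectrum)]
    return spectrum[k] / np.mean(spectrum[neighbors])

# Synthetic field potential with a response at an 8 Hz modulation frequency
fs = 1000
t = np.arange(0, 5, 1 / fs)
lfp = 0.4 * np.sin(2 * np.pi * 8 * t) + np.random.randn(t.size)
print("SNR at 8 Hz:", snr_at_frequency(lfp, fs, 8.0))
```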

  14. Neural Mechanisms Underlying Visual Short-Term Memory Gain for Temporally Distinct Objects.

    Science.gov (United States)

    Ihssen, Niklas; Linden, David E J; Miller, Claire E; Shapiro, Kimron L

    2015-08-01

    Recent research has shown that visual short-term memory (VSTM) can substantially be improved when the to-be-remembered objects are split in 2 half-arrays (i.e., sequenced) or the entire array is shown twice (i.e., repeated), rather than presented simultaneously. Here we investigate the hypothesis that sequencing and repeating displays overcomes attentional "bottlenecks" during simultaneous encoding. Using functional magnetic resonance imaging, we show that sequencing and repeating displays increased brain activation in extrastriate and primary visual areas, relative to simultaneous displays (Study 1). Passively viewing identical stimuli did not increase visual activation (Study 2), ruling out a physical confound. Importantly, areas of the frontoparietal attention network showed increased activation in repetition but not in sequential trials. This dissociation suggests that repeating a display increases attentional control by allowing attention to be reallocated in a second encoding episode. In contrast, sequencing the array poses fewer demands on control, with competition from nonattended objects being reduced by the half-arrays. This idea was corroborated by a third study in which we found optimal VSTM for sequential displays minimizing attentional demands. Importantly these results provide support within the same experimental paradigm for the role of stimulus-driven and top-down attentional control aspects of biased competition theory in setting constraints on VSTM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Network Analysis of Foramen Ovale Electrode Recordings in Drug-resistant Temporal Lobe Epilepsy Patients

    Science.gov (United States)

    Sanz-García, Ancor; Vega-Zelaya, Lorena; Pastor, Jesús; Torres, Cristina V.; Sola, Rafael G.; Ortega, Guillermo J.

    2016-01-01

    Approximately 30% of epilepsy patients are refractory to antiepileptic drugs. In these cases, surgery is the only alternative to eliminate/control seizures. However, a significant minority of patients continues to exhibit post-operative seizures, even in those cases in which the suspected source of seizures has been correctly localized and resected. The protocol presented here combines a clinical procedure routinely employed during the pre-operative evaluation of temporal lobe epilepsy (TLE) patients with a novel technique for network analysis. The method allows for the evaluation of the temporal evolution of mesial network parameters. The bilateral insertion of foramen ovale electrodes (FOE) into the ambient cistern simultaneously records electrocortical activity at several mesial areas in the temporal lobe. Furthermore, network methodology applied to the recorded time series tracks the temporal evolution of the mesial networks both interictally and during the seizures. In this way, the presented protocol offers a unique way to visualize and quantify measures that considers the relationships between several mesial areas instead of a single area. PMID:28060326
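
    The network analysis outlined above tracks graph measures over time from multichannel recordings. As a minimal, generic illustration (not the specific methodology of this protocol), the sketch below builds a thresholded correlation network in consecutive windows of synthetic multichannel data and reports its link density; the window length, threshold and channel count are placeholders.

```python
import numpy as np

def sliding_network_density(data, fs, win_s=5.0, threshold=0.5):
    """Mean link density of a thresholded correlation network in sliding windows.

    data: array of shape (n_channels, n_samples).
    """
    n_ch, n_samp = data.shape
    win = int(win_s * fs)
    densities = []
    for start in range(0, n_samp - win + 1, win):
        seg = data[:, start:start + win]
        corr = np.corrcoef(seg)                        # channel-by-channel correlation
        adj = (np.abs(corr) > threshold).astype(int)   # binary adjacency matrix
        np.fill_diagonal(adj, 0)
        densities.append(adj.sum() / (n_ch * (n_ch - 1)))
    return np.array(densities)

# Six synthetic channels, 60 s at 256 Hz
fs = 256
data = np.random.randn(6, 60 * fs)
print(sliding_network_density(data, fs)[:5])
```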

  16. Language and Visual Perception Associations: Meta-Analytic Connectivity Modeling of Brodmann Area 37

    OpenAIRE

    Ardila, Alfredo; Bernal, Byron; Rosselli, Monica

    2015-01-01

    Background. Understanding the functions of different brain areas has represented a major endeavor of neurosciences. Historically, brain functions have been associated with specific cortical brain areas; however, modern neuroimaging developments suggest cognitive functions are associated to networks rather than to areas. Objectives. The purpose of this paper was to analyze the connectivity of Brodmann area (BA) 37 (posterior, inferior, and temporal/fusiform gyrus) in relation to (1) language a...

  17. Posterior superior temporal sulcus responses predict perceived pleasantness of skin stroking

    Directory of Open Access Journals (Sweden)

    Monika Davidovic

    2016-09-01

    Full Text Available Love and affection are expressed through a range of physically intimate gestures, including caresses. Recent studies suggest that posterior temporal lobe areas typically associated with visual processing of social cues also respond to interpersonal touch. Here, we asked whether these areas are selective for caress-like skin stroking. We collected functional magnetic resonance imaging (fMRI) data from 23 healthy participants and compared brain responses to skin stroking and vibration. We did not find any significant differences between stroking and vibration in the posterior temporal lobe; however, right posterior superior temporal sulcus (pSTS) responses predicted healthy participants' perceived pleasantness of skin stroking, but not vibration. These findings link right pSTS responses to individual variability in perceived pleasantness of caress-like tactile stimuli. We speculate that the right pSTS may play a role in the translation of tactile stimuli into positively valenced, socially relevant interpersonal touch and that this system may be affected in disorders associated with impaired attachment.

  18. Pairwise comparisons and visual perceptions of equal area polygons.

    Science.gov (United States)

    Adamic, P; Babiy, V; Janicki, R; Kakiashvili, T; Koczkodaj, W W; Tadeusiewicz, R

    2009-02-01

    The number of studies related to visual perception has been plentiful in recent years. Participants rated the areas of five randomly generated shapes of equal area, using a reference unit area that was displayed together with the shapes. Respondents were 179 university students from Canada and Poland. The average error estimated by respondents using the unit square was 25.75%. The error was substantially decreased to 5.51% when the shapes were compared to one another in pairs. This gain of 20.24% for this two-dimensional experiment was substantially better than the 11.78% gain reported in the previous one-dimensional experiments. This is the first statistically sound two-dimensional experiment demonstrating that pairwise comparisons improve accuracy.
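
    The reported figures (25.75% vs. 5.51%) are mean estimation errors expressed as a percentage of the true area. A minimal sketch of that error measure is given below; the example ratings are invented, and the step that converts pairwise comparisons into area estimates (e.g., via a pairwise-comparison matrix) is not shown.

```python
import numpy as np

def mean_relative_error(estimates, true_value):
    """Mean absolute estimation error as a percentage of the true area."""
    estimates = np.asarray(estimates, dtype=float)
    return 100 * np.mean(np.abs(estimates - true_value) / true_value)

# Illustrative ratings of a shape whose true area is 5 reference units
direct = [3.5, 4.0, 6.5, 7.0]      # judged against the unit square
pairwise = [4.8, 5.2, 5.1, 4.7]    # derived from pairwise comparisons
print("direct error (%):", mean_relative_error(direct, 5.0))
print("pairwise error (%):", mean_relative_error(pairwise, 5.0))
```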

  19. A neural model of the temporal dynamics of figure-ground segregation in motion perception.

    Science.gov (United States)

    Raudies, Florian; Neumann, Heiko

    2010-03-01

    How does the visual system manage to segment a visual scene into surfaces and objects and manage to attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by the processing at different levels of the visual cortical hierarchy. According to this, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal epochs have been observed in the activation pattern of neurons as low as in area V1. Here, we present a neural network model of motion detection, figure-ground segregation and attentive selection which explains these response patterns in an unifying framework. Based on known principles of functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries, which were generated from juxtaposed opponent motions, are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable the communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathway are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy

  20. Differential metabolic rates in prefrontal and temporal Brodmann areas in schizophrenia and schizotypal personality disorder.

    Science.gov (United States)

    Buchsbaum, Monte S; Nenadic, Igor; Hazlett, Erin A; Spiegel-Cohen, Jacqueline; Fleischman, Michael B; Akhavan, Arash; Silverman, Jeremy M; Siever, Larry J

    2002-03-01

    In an exploration of the schizophrenia spectrum, we compared cortical metabolic rates in unmedicated patients with schizophrenia and schizotypal personality disorder (SPD) with findings in age- and sex-matched normal volunteers. Coregistered magnetic resonance imaging (MRI) and positron emission tomography (PET) scans were obtained in 27 schizophrenic, 13 SPD, and 32 normal volunteers who performed a serial verbal learning test during tracer uptake. A template of Brodmann areas derived from a whole brain histological section atlas was used to analyze PET findings. Significantly lower metabolic rates were found in prefrontal areas 44-46 in schizophrenic patients than in normal volunteers. SPD patients did not differ from normal volunteers in most lateral frontal regions, but they had values intermediate between those of normal volunteers and schizophrenic patients in lateral temporal regions. SPD patients showed higher than normal metabolic rates in both medial frontal and medial temporal areas. Metabolic rates in Brodmann area 10 were distinctly higher in SPD patients than in either normal volunteers or schizophrenic patients.

  1. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    Directory of Open Access Journals (Sweden)

    Jordi Navarra

    Full Text Available The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  2. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Full Text Available Pegah Afra, Michael Funke, Fumisuke Matsuo. Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal

  3. Steady-state visually evoked potential correlates of human body perception.

    Science.gov (United States)

    Giabbiconi, Claire-Marie; Jurilj, Verena; Gruber, Thomas; Vocks, Silja

    2016-11-01

    In cognitive neuroscience, interest in the neuronal basis underlying the processing of human bodies is steadily increasing. Based on functional magnetic resonance imaging studies, it is assumed that the processing of pictures of human bodies is anchored in a network of specialized brain areas comprising the extrastriate and the fusiform body area (EBA, FBA). An alternative way to examine the dynamics within these networks is electroencephalography, more specifically so-called steady-state visually evoked potentials (SSVEPs). In SSVEP tasks, a visual stimulus is presented repetitively at a predefined flickering rate and typically elicits a continuous oscillatory brain response at this frequency. This brain response is characterized by an excellent signal-to-noise ratio, a major advantage for source reconstructions. The main goal of the present study was to demonstrate the feasibility of this method for studying human body perception. To that end, we presented pictures of bodies and contrasted the resulting SSVEPs to two control conditions, i.e., non-objects and pictures of everyday objects (chairs). We found specific SSVEP amplitude differences between bodies and both control conditions. Source reconstructions localized the SSVEP generators to a network of temporal, occipital and parietal areas. Interestingly, only body perception resulted in activity differences in middle temporal and lateral occipitotemporal areas, most likely reflecting the EBA/FBA.
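
    SSVEP analyses of this kind typically average the epochs in the time domain (so that only activity phase-locked to the flicker survives) and then read out the spectral amplitude at the tagging frequency. The sketch below illustrates this on synthetic data; the flicker rate, trial counts and condition names are assumptions, not the parameters of the study.

```python
import numpy as np

def ssvep_amplitude(epochs, fs, flicker_hz):
    """Amplitude at the tagging frequency after time-domain averaging of epochs.

    epochs: array of shape (n_trials, n_samples), time-locked to flicker onset.
    """
    evoked = epochs.mean(axis=0)                       # keeps only phase-locked activity
    spectrum = np.abs(np.fft.rfft(evoked)) / evoked.size
    freqs = np.fft.rfftfreq(evoked.size, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - flicker_hz))]

# Two synthetic conditions tagged at 12 Hz, one with a stronger response
fs, n_trials = 500, 30
t = np.arange(0, 2, 1 / fs)
bodies = 1.5 * np.sin(2 * np.pi * 12 * t) + np.random.randn(n_trials, t.size)
chairs = 0.8 * np.sin(2 * np.pi * 12 * t) + np.random.randn(n_trials, t.size)
print("bodies:", ssvep_amplitude(bodies, fs, 12.0), "chairs:", ssvep_amplitude(chairs, fs, 12.0))
```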

  4. The temporal dynamics of visual working memory guidance of selective attention.

    Science.gov (United States)

    Tan, Jinfeng; Zhao, Yuanfang; Wu, Shanshan; Wang, Lijun; Hitchman, Glenn; Tian, Xia; Li, Ming; Hu, Li; Chen, Antao

    2014-01-01

    The biased competition model proposes that there is top-down directing of attention to a stimulus matching the contents of working memory (WM), even when the maintenance of a WM representation is detrimental to target relevant performance. Despite many studies elucidating that spatial WM guidance can be present early in the visual processing system, whether visual WM guidance also influences perceptual selection remains poorly understood. Here, we investigated the electrophysiological correlates of early guidance of attention by WM in humans. Participants were required to perform a visual search task while concurrently maintaining object representations in their visual WM. Behavioral results showed that response times (RTs) were longer when the distractor in the visual search task was held in WM. The earliest WM guidance effect was observed in the P1 component (90-130 ms), with match trials eliciting larger P1 amplitude than mismatch trials. A similar result was also found in the N1 component (160-200 ms). These P1 and N1 effects could not be attributed to bottom-up perceptual priming from the presentation of a memory cue, because there was no significant difference in early event-related potential (ERP) component when the cue was merely perceptually identified but not actively held in WM. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the early WM guidance occurred in the occipital lobe and the N1-related activation occurred in the parietal gyrus. Time-frequency data suggested that alpha-band event-related spectral perturbation (ERSP) magnitudes increased under the match condition compared with the mismatch condition only when the cue was held in WM. In conclusion, the present study suggests that the reappearance of a stimulus held in WM enhanced activity in the occipital area. Subsequently, this initial capture of attention by WM could be inhibited by competing visual inputs through attention re-orientation, reflecting by the
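
    The P1/N1 effects described above are typically quantified as baseline-corrected mean amplitudes of the trial-averaged ERP within a post-stimulus time window (here 90-130 ms for P1). The sketch below shows that computation for a single electrode; the epoch layout, sampling rate and synthetic data are assumptions, not the recording parameters of the study.

```python
import numpy as np

def component_amplitude(epochs, fs, window_ms, baseline_ms=(-100, 0), t0_ms=100):
    """Baseline-corrected mean ERP amplitude in a post-stimulus window.

    epochs: (n_trials, n_samples); t0_ms gives the stimulus onset within the epoch.
    """
    erp = epochs.mean(axis=0)                               # trial-averaged ERP
    ms = (np.arange(epochs.shape[1]) / fs) * 1000 - t0_ms   # time axis in ms relative to onset
    base = erp[(ms >= baseline_ms[0]) & (ms < baseline_ms[1])].mean()
    win = (ms >= window_ms[0]) & (ms < window_ms[1])
    return erp[win].mean() - base

# Placeholder single-electrode epochs spanning -100..700 ms at 500 Hz
fs = 500
match = np.random.randn(80, 400)
mismatch = np.random.randn(80, 400)
p1_window = (90, 130)
print("P1 match-minus-mismatch:",
      component_amplitude(match, fs, p1_window) - component_amplitude(mismatch, fs, p1_window))
```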

  5. Lateralisation with magnetic resonance spectroscopic imaging in temporal lobe epilepsy: an evaluation of visual and region-of-interest analysis of metabolite concentration images

    Energy Technology Data Exchange (ETDEWEB)

    Vikhoff-Baaz, B. [Sahlgrenska University Hospital, Goeteborg (Sweden); Div. of Medical Physics and Biomedical Engineering, Goeteborg Univ. (Sweden); Goeteborg Univ. (Sweden). Dept. of Radiation Physics; Malmgren, K. [Dept. of Neurology, Goeteborg Univ. (Sweden); Joensson, L.; Ekholm, S. [Dept. of Radiology, Goeteborg Univ. (Sweden); Starck, G. [Div. of Medical Physics and Biomedical Engineering, Goeteborg Univ. (Sweden); Ljungberg, M.; Forssell-Aronsson, E. [Goeteborg Univ. (Sweden). Dept. of Radiation Physics; Uvebrant, P. [Dept. of Paediatrics, Goeteborg Univ. (Sweden)

    2001-09-01

    We carried out spectroscopic imaging (MRSI) on nine consecutive patients with temporal lobe epilepsy being assessed for epilepsy surgery, and nine neurologically healthy, age-matched volunteers. A volume of interest (VOI) was angled along the temporal horns on axial and sagittal images, and symmetrically over the temporal lobes on coronal images. Images showing the concentrations of N-acetylaspartate (NAA) and of choline-containing compounds plus creatine and phosphocreatine (Cho + Cr) were used for lateralisation. We compared assessment by visual inspection and by signal analysis from regions of interest (ROI) in different positions, where side-to-side differences in NAA/(Cho + Cr) ratio were used for lateralisation. The NAA/(Cho + Cr) ratio from the different ROI was also compared with that in the brain stem to assess if the latter could be used as an internal reference, e. g., for identification of bilateral changes. The metabolite concentration images were found useful for lateralisation of temporal lobe abnormalities related to epilepsy. Visual analysis can, with high accuracy, be used routinely. ROI analysis is useful for quantifying changes, giving more quantitative information about spatial distribution and the degree of signal loss. There was a large variation in NAA/(Cho + Cr) values in both patients and volunteers. The brain stem may be used as a reference for identification of bilateral changes. (orig.)

  6. Lateralisation with magnetic resonance spectroscopic imaging in temporal lobe epilepsy: an evaluation of visual and region-of-interest analysis of metabolite concentration images

    International Nuclear Information System (INIS)

    Vikhoff-Baaz, B.; Joensson, L.; Ekholm, S.; Starck, G.

    2001-01-01

    We carried out spectroscopic imaging (MRSI) on nine consecutive patients with temporal lobe epilepsy being assessed for epilepsy surgery, and nine neurologically healthy, age-matched volunteers. A volume of interest (VOI) was angled along the temporal horns on axial and sagittal images, and symmetrically over the temporal lobes on coronal images. Images showing the concentrations of N-acetylaspartate (NAA) and of choline-containing compounds plus creatine and phosphocreatine (Cho + Cr) were used for lateralisation. We compared assessment by visual inspection and by signal analysis from regions of interest (ROI) in different positions, where side-to-side differences in NAA/(Cho + Cr) ratio were used for lateralisation. The NAA/(Cho + Cr) ratio from the different ROI was also compared with that in the brain stem to assess if the latter could be used as an internal reference, e. g., for identification of bilateral changes. The metabolite concentration images were found useful for lateralisation of temporal lobe abnormalities related to epilepsy. Visual analysis can, with high accuracy, be used routinely. ROI analysis is useful for quantifying changes, giving more quantitative information about spatial distribution and the degree of signal loss. There was a large variation in NAA/(Cho + Cr) values in both patients and volunteers. The brain stem may be used as a reference for identification of bilateral changes. (orig.)

  7. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  8. Visualizing Metrics on Areas of Interest in Software Architecture Diagrams

    NARCIS (Netherlands)

    Byelas, Heorhiy; Telea, Alexandru; Eades, P; Ertl, T; Shen, HW

    2009-01-01

    We present a new method for the combined visualization of software architecture diagrams, Such as UML class diagrams or component diagrams, and software metrics defined on groups of diagram elements. Our method extends an existing rendering technique for the so-called areas of interest in system

  9. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (McGurk-effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and the integrative brain sites in the vicinity of the superior temporal sulcus (STS) for multisensory speech perception. However, if and how does the network across the whole brain participates during multisensory perception processing remains an open question. We posit that a large-scale functional connectivity among the neural population situated in distributed brain sites may provide valuable insights involved in processing and fusing of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent audio-visual (AV) speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs was computed using the time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception around a temporal window of 300-600 ms following onset of stimuli. During asynchronous speech stimuli, a global broadband coherence was observed during cross-modal perception at earlier times along with pre-stimulus decreases of lower frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our
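
    The global coherence measure used in the study is a time-frequency quantity built from all pairwise coherences. As a rough stand-in sketch (not the authors' implementation), the snippet below averages ordinary magnitude-squared coherence across all sensor pairs within one frequency band using scipy; the band limits, segment length and synthetic channels are assumptions.

```python
import numpy as np
from itertools import combinations
from scipy.signal import coherence

def global_band_coherence(data, fs, band=(30, 45)):
    """Mean magnitude-squared coherence over all sensor pairs within a frequency band."""
    values = []
    for i, j in combinations(range(data.shape[0]), 2):
        f, cxy = coherence(data[i], data[j], fs=fs, nperseg=fs)
        mask = (f >= band[0]) & (f <= band[1])
        values.append(cxy[mask].mean())
    return np.mean(values)

# Eight synthetic EEG channels sharing a weak 40 Hz component
fs = 250
t = np.arange(0, 20, 1 / fs)
common = np.sin(2 * np.pi * 40 * t)
eeg = 0.3 * common + np.random.randn(8, t.size)
print("gamma-band global coherence:", global_band_coherence(eeg, fs))
```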

  10. Multisensory speech perception without the left superior temporal sulcus.

    Science.gov (United States)

    Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S

    2012-09-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left tempo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS but in the right STS, more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response in right STS response to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Science.gov (United States)

    Emadi, Nazli; Rajimehr, Reza; Esteky, Hossein

    2014-01-01

    Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from the single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically we found that a low-frequency (baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance. PMID:25404900

  12. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  13. Usefulness of medial temporal lobe atrophy visual rating scale in detecting Alzheimer's disease: Preliminary study

    Directory of Open Access Journals (Sweden)

    Jae-Hyeok Heo

    2013-01-01

    Full Text Available Background: The Korean version of the Mini-Mental Status Examination (K-MMSE) and the Korean version of the Addenbrooke Cognitive Examination (K-ACE) have been validated as quick neuropsychological tests for screening dementia in various clinical settings. Medial temporal atrophy (MTA) is an early pathological characteristic of Alzheimer's disease (AD). We aimed to assess the diagnostic validity of the fusion of a neuropsychological test and a visual rating scale (VRS) of MTA in AD. Materials and Methods: A total of fifty subjects (25 AD, 25 controls) were included. The neuropsychological tests used were the K-MMSE and the K-ACE. A T1 axial imaging visual rating scale (VRS) was applied to assess the grade of MTA. We calculated the fusion score as the difference between the neuropsychological test score and the VRS of MTA. The receiver operating characteristic (ROC) curve was used to determine the optimal cut-off score, sensitivity and specificity of the fusion scores in screening for AD. Results: No significant differences in age, gender and education were found between the AD and control groups. The values of K-MMSE, K-ACE, CDR, VRS and cognitive function test minus VRS were significantly lower in the AD group than in the control group. The AUC (area under the curve), sensitivity and specificity for K-MMSE minus VRS were 0.857, 84% and 80%, and for K-ACE minus VRS were 0.884, 80% and 88%, respectively. Those for K-MMSE only were 0.842, 76% and 72%, and for K-ACE only were 0.868, 80% and 88%, respectively. Conclusions: The fusion of the neuropsychological test and the VRS suggests clinical usefulness, given its ease of use and apparent superiority over the neuropsychological test alone. However, this study failed to find a significant difference, which may be because of the small sample or because there is no true difference.
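
    The reported AUC, sensitivity and specificity come from an ROC analysis of the fusion score (cognitive test score minus the MTA visual rating). A minimal sketch of that analysis with synthetic scores is shown below, using the Youden index to pick the cut-off; the simulated distributions are invented and only illustrate the computation.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Synthetic K-MMSE-like scores and MTA visual ratings for 25 AD patients and 25 controls
mmse = np.concatenate([rng.normal(21, 3, 25), rng.normal(27, 2, 25)])
mta = np.concatenate([rng.normal(2.5, 0.8, 25), rng.normal(0.8, 0.6, 25)])
labels = np.array([1] * 25 + [0] * 25)     # 1 = AD, 0 = control

fusion = mmse - mta                        # higher values indicate less impairment
scores = -fusion                           # flip sign so larger score = more likely AD
fpr, tpr, thresholds = roc_curve(labels, scores)
youden = np.argmax(tpr - fpr)              # cut-off maximising sensitivity + specificity - 1
print("AUC:", roc_auc_score(labels, scores))
print("cut-off (fusion scale):", -thresholds[youden],
      "sensitivity:", tpr[youden], "specificity:", 1 - fpr[youden])
```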

  14. Area vs. density: influence of visual variables and cardinality knowledge in early number comparison.

    Science.gov (United States)

    Abreu-Mendoza, Roberto A; Soto-Alba, Elia E; Arias-Trejo, Natalia

    2013-01-01

    Current research in the number development field has focused in individual differences regarding the acuity of children's approximate number system (ANS). The most common task to evaluate children's acuity is through non-symbolic numerical comparison. Efforts have been made to prevent children from using perceptual cues by controlling the visual properties of the stimuli (e.g., density, contour length, and area); nevertheless, researchers have used these visual controls interchangeably. Studies have also tried to understand the relation between children's cardinality knowledge and their performance in a number comparison task; divergent results may in fact be rooted in the use of different visual controls. The main goal of the present study is to explore how the usage of different visual controls (density, total filled area, and correlated and anti-correlated area) affects children's performance in a number comparison task, and its relationship to children's cardinality knowledge. For that purpose, 77 preschoolers participated in three tasks: (1) counting list elicitation to test whether children could recite the counting list up to ten, (2) give a number to evaluate children's cardinality knowledge, and (3) number comparison to evaluate their ability to compare two quantities. During this last task, children were asked to point at the set with more geometric figures when two sets were displayed on a screen. Children were exposed only to one of the three visual controls. Results showed that overall, children performed above chance in the number comparison task; nonetheless, density was the easiest control, while correlated and anti-correlated area was the most difficult in most cases. Only total filled area was sensitive to discriminate cardinal principal knowers from non-cardinal principal knowers. How this finding helps to explain conflicting evidence from previous research, and how the present outcome relates to children's number word knowledge is discussed.

  15. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device.

    Science.gov (United States)

    McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.

  16. Differential contribution of early visual areas to the perceptual process of contour processing.

    Science.gov (United States)

    Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A

    2004-04-01

    We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.

  17. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    Science.gov (United States)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied a transcranial magnetic stimulation (TMS) to investigate the temporal aspect for the functional processing of visual attention. Although it has been known that right posterior parietal cortex (PPC) in the brain has a role in certain visual search tasks, there is little knowledge about the temporal aspect of this area. Three visual search tasks that have different difficulties of task execution individually were carried out. These three visual search tasks are the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC involved in the visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied on the right PPC or the left PPC by the figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. When SOA=150 ms, compared with no-TMS condition, there was a significant increase in target-present reaction time when TMS pulses were applied. We considered that the right PPC was involved in the visual search at about SOA=150 ms after visual stimulus presentation. The magnetic stimulation to the right PPC disturbed the processing of the visual search. However, the magnetic stimulation to the left PPC gives no effect on the processing of the visual search.

  18. Conscious and nonconscious memory effects are temporally dissociable.

    Science.gov (United States)

    Slotnick, Scott D; Schacter, Daniel L

    2010-03-01

    Intentional (explicit) retrieval can reactivate sensory cortex, which is widely assumed to reflect conscious processing. In the present study, we used an explicit visual memory event-related potential paradigm to investigate whether such retrieval related sensory activity could be separated into conscious and nonconscious components. During study, abstract shapes were presented in the left or right visual field. During test, old and new shapes were presented centrally and participants classified each shape as "old-left", "old-right", or "new". Conscious activity was isolated by comparing accurate memory for shape and location (old-hits) with forgotten shapes (old-misses), and nonconscious activity was isolated by comparing old-left-misses with old-right-misses and vice versa. Conscious visual sensory activity had a late temporal onset (after 800 ms) while nonconscious visual sensory activity had an early temporal onset (before 800 ms). These results suggest explicit memory related sensory activity reflects both conscious and nonconscious processes that are temporally dissociable.

  19. Neural Correlates of Body and Face Perception Following Bilateral Destruction of the Primary Visual Cortices

    Directory of Open Access Journals (Sweden)

    Jan eVan den Stock

    2014-02-01

    Full Text Available Non-conscious visual processing of different object categories was investigated in a rare patient with bilateral destruction of the visual cortex (V1) and clinical blindness over the entire visual field. Images of biological and non-biological object categories were presented, consisting of human bodies, faces, butterflies, cars, and scrambles. Behaviorally, only the body shape induced higher perceptual sensitivity, as revealed by signal detection analysis. Passive exposure to bodies and faces activated the amygdala and superior temporal sulcus. In addition, bodies also activated the extrastriate body area, insula, orbitofrontal cortex (OFC) and cerebellum. The results show that following bilateral damage to the primary visual cortex and ensuing complete cortical blindness, the human visual system is able to process categorical properties of human body shapes. This residual vision may be based on V1-independent input to body-selective areas along the ventral stream, in concert with areas involved in the representation of bodily states, like the insula, OFC and cerebellum.
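
    "Higher perceptual sensitivity, as revealed by signal detection analysis" usually refers to d', the difference between the z-transformed hit and false-alarm rates. The snippet below computes d' with a simple correction for extreme rates; the counts are invented for illustration.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Perceptual sensitivity d' from a yes/no detection table, with a small
    correction that keeps hit and false-alarm rates away from 0 and 1."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative counts for body stimuli vs. scrambled images
print("d':", d_prime(hits=34, misses=16, false_alarms=12, correct_rejections=38))
```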

  20. Earthquake precursors: spatial-temporal gravity changes before the great earthquakes in the Sichuan-Yunnan area

    Science.gov (United States)

    Zhu, Yi-Qing; Liang, Wei-Feng; Zhang, Song

    2018-01-01

    Using multiple-scale mobile gravity data in the Sichuan-Yunnan area, we systematically analyzed the relationships between spatial-temporal gravity changes and the 2014 Ludian, Yunnan Province Ms6.5 earthquake and the 2014 Kangding Ms6.3, 2013 Lushan Ms7.0, and 2008 Wenchuan Ms8.0 earthquakes in Sichuan Province. Our main results are as follows. (1) Before the occurrence of large earthquakes, gravity anomalies occur in a large area around the epicenters. The directions of gravity change gradient belts usually agree roughly with the directions of the main fault zones of the study area. Such gravity changes might reflect the increase of crustal stress, as well as the significant active tectonic movements and surface deformations along fault zones, during the period of gestation of great earthquakes. (2) Continuous significant changes of the multiple-scale gravity fields, as well as greater gravity changes with larger time scales, can be regarded as medium-range precursors of large earthquakes. The subsequent large earthquakes always occur in the area where the gravity changes greatly. (3) The spatial-temporal gravity changes are very useful in determining the epicenter of coming large earthquakes. The large gravity networks are useful to determine the general areas of coming large earthquakes. However, the local gravity networks with high spatial-temporal resolution are suitable for determining the location of epicenters. Therefore, denser gravity observation networks are necessary for better forecasts of the epicenters of large earthquakes. (4) Using gravity changes from mobile observation data, we made medium-range forecasts of the Kangding, Ludian, Lushan, and Wenchuan earthquakes, with especially successful forecasts of the location of their epicenters. Based on the above discussions, we emphasize that medium-/long-term potential for large earthquakes might exist nowadays in some areas with significant gravity anomalies in the study region. Thus, the monitoring

  1. Associative-memory representations emerge as shared spatial patterns of theta activity spanning the primate temporal cortex.

    Science.gov (United States)

    Nakahara, Kiyoshi; Adachi, Ken; Kawasaki, Keisuke; Matsuo, Takeshi; Sawahata, Hirohito; Majima, Kei; Takeda, Masaki; Sugiyama, Sayaka; Nakata, Ryota; Iijima, Atsuhiko; Tanigawa, Hisashi; Suzuki, Takafumi; Kamitani, Yukiyasu; Hasegawa, Isao

    2016-06-10

    Highly localized neuronal spikes in primate temporal cortex can encode associative memory; however, whether memory formation involves area-wide reorganization of ensemble activity, which often accompanies rhythmicity, or just local microcircuit-level plasticity, remains elusive. Using high-density electrocorticography, we capture local-field potentials spanning the monkey temporal lobes, and show that the visual pair-association (PA) memory is encoded in spatial patterns of theta activity in areas TE, 36, and, partially, in the parahippocampal cortex, but not in the entorhinal cortex. The theta patterns elicited by learned paired associates are distinct between pairs, but similar within pairs. This pattern similarity, emerging through novel PA learning, allows a machine-learning decoder trained on theta patterns elicited by a particular visual item to correctly predict the identity of those elicited by its paired associate. Our results suggest that the formation and sharing of widespread cortical theta patterns via learning-induced reorganization are involved in the mechanisms of associative memory representation.
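
    The decoding result described above can be approximated in spirit with a simple linear classifier applied to spatial patterns of theta activity. The sketch below trains a logistic-regression decoder on synthetic channel-by-trial patterns; it uses plain cross-validation over trials rather than the authors' train-on-one-item, test-on-its-associate scheme, and all sizes and data are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels, n_pairs = 200, 64, 4

# Synthetic spatial theta-power patterns: each paired associate shares a pattern template
templates = rng.normal(size=(n_pairs, n_channels))
pair_id = rng.integers(0, n_pairs, n_trials)
patterns = templates[pair_id] + rng.normal(scale=2.0, size=(n_trials, n_channels))

# Linear decoder predicting pair identity from the spatial pattern
clf = LogisticRegression(max_iter=1000)
print("decoding accuracy:", cross_val_score(clf, patterns, pair_id, cv=5).mean())
```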

  2. Area-specific temporal control of corticospinal motor neuron differentiation by COUP-TFI

    Science.gov (United States)

    Tomassy, Giulio Srubek; De Leonibus, Elvira; Jabaudon, Denis; Lodato, Simona; Alfano, Christian; Mele, Andrea; Macklis, Jeffrey D.; Studer, Michèle

    2010-01-01

    Transcription factors with gradients of expression in neocortical progenitors give rise to distinct motor and sensory cortical areas by controlling the area-specific differentiation of distinct neuronal subtypes. However, the molecular mechanisms underlying this area-restricted control are still unclear. Here, we show that COUP-TFI controls the timing of birth and specification of corticospinal motor neurons (CSMN) in somatosensory cortex via repression of a CSMN differentiation program. Loss of COUP-TFI function causes an area-specific premature generation of neurons with cardinal features of CSMN, which project to subcerebral structures, including the spinal cord. Concurrently, genuine CSMN differentiate imprecisely and do not project beyond the pons, together resulting in impaired skilled motor function in adult mice with cortical COUP-TFI loss-of-function. Our findings indicate that COUP-TFI exerts critical areal and temporal control over the precise differentiation of CSMN during corticogenesis, thereby enabling the area-specific functional features of motor and sensory areas to arise. PMID:20133588

  3. MRI findings of temporal lobe epilepsy

    International Nuclear Information System (INIS)

    Nakahara, Ichiro; Yin, Dali; Fukami, Masahiro; Kondo, Seiji; Takeuchi, Juji; Kanemoto, Kousuke; Sengoku, Akira; Kawai, Itsuo

    1992-01-01

    MRI findings were analyzed retrospectively in 46 patients with temporal lobe epilepsy in whom the side of the epileptogenic focus had been confirmed by EEG studies. T1- and T2-weighted images were obtained with a 1.0 or 1.5 T superconducting MRI machine, using a coronal scan perpendicular to the axis of the temporal horn of the lateral ventricle. Additional axial and sagittal scans were performed in some cases. The area of the hippocampal body was measured quantitatively using a computerized image-analysis system in 26 cases in which the hippocampus had been visualized with enough contrast on T1-weighted coronal images. Abnormal findings were observed in 31/46 (67%) cases. Hippocampal (HC) and temporal lobe (TL) atrophy were observed in 18/46 (39%) and 23/46 (50%) cases respectively, and the side of the atrophy corresponded with the side of the epileptogenic focus, as confirmed by EEG studies, with specificities of 89% and 74% respectively. A quantitative measurement of the area of the hippocampal body showed unilateral hippocampal atrophy of more than 10% in 18/25 (69%) cases (10-25%: 10 cases, 25-50%: 7 cases, >50%: 1 case); a T2 abnormality was observed in only 4 cases. Structural lesions were observed in 4 cases, including an arachnoid cyst, an astrocytoma in the amygdala, the Dandy-Walker syndrome, and tuberous sclerosis, owing to imaging quality superior to that of CT. From these observations, it is apparent that superconducting MRI is extremely useful in the diagnosis of the epileptogenic topography of temporal lobe epilepsy. In particular, hippocampal atrophy was found to correspond with the side of the epileptogenic focus on EEG with high specificity; its quantitative evaluation could be one of the most important standards for determining the operative indications for temporal lobe epilepsy. (author)
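
    The percentage figures for unilateral hippocampal atrophy express how much smaller the hippocampal body's cross-sectional area is on one side relative to the other. The exact formula is not stated in the record, so the helper below is just one plausible definition, with made-up measurements.

```python
def hippocampal_asymmetry(area_left_mm2, area_right_mm2):
    """Percentage difference of the smaller hippocampal cross-sectional area
    relative to the larger one (one plausible asymmetry index)."""
    larger = max(area_left_mm2, area_right_mm2)
    smaller = min(area_left_mm2, area_right_mm2)
    return 100 * (larger - smaller) / larger

# Illustrative measurements: left hippocampal body 38 mm^2, right 51 mm^2
print(f"{hippocampal_asymmetry(38.0, 51.0):.1f}% atrophy of the smaller side")
```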

  4. Impaired temporal, not just spatial, resolution in amblyopia.

    Science.gov (United States)

    Spang, Karoline; Fahle, Manfred

    2009-11-01

    In amblyopia, neuronal deficits degrade spatial vision, including visual acuity, possibly because of a lack of use-dependent fine-tuning of afferents to the visual cortex during infancy; but temporal processing may deteriorate as well. Temporal, rather than spatial, resolution was investigated in patients with amblyopia by means of a task based on time-defined figure-ground segregation. Patients had to indicate the quadrant of the visual field where a purely time-defined square appeared. The results showed a clear decrease in temporal resolution of patients' amblyopic eyes compared with the dominant eyes in this task. The extent of this decrease in figure-ground segregation based on time of motion onset only loosely correlated with the decrease in spatial resolution and spanned a smaller range than did the spatial loss. Control experiments with artificially induced blur in normal observers confirmed that the decrease in temporal resolution was not simply due to the acuity loss. Amblyopia thus not only decreases spatial resolution but also impairs temporal processing, such as time-based figure-ground segregation, even at high stimulus contrasts. This finding suggests that the realm of neuronal processes that may be disturbed in amblyopia is larger than originally thought.

  5. 3D City Models with Different Temporal Characteristica

    DEFF Research Database (Denmark)

    Bodum, Lars

    2005-01-01

    … traditional static city models and those models that are built for realtime applications. The difference between the city models applies both to the spatial modelling and also to the use of the phenomenon time in the models. If the city models are used in visualizations without any variation in time or when …-built dynamic or a model suitable for visualization in realtime, it is required that modelling is done with level-of-detail and simplification of both the aesthetics and the geometry. If a temporal characteristic is combined with a visual characteristic, the situation can easily be seen as a t/v matrix, where t is the temporal characteristic or representation and v is the visual characteristic or representation.

  6. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4.

    Science.gov (United States)

    Braun, J

    1994-02-01

    In more than one respect, visual search for the most salient or the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.

  7. Deep Learning Predicts Correlation between a Functional Signature of Higher Visual Areas and Sparse Firing of Neurons

    Directory of Open Access Journals (Sweden)

    Chengxu Zhuang

    2017-10-01

    Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models, including AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and properties of the training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.
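
    As a concrete illustration of the "sparse firing" factor referred to above, the sketch below computes the Treves-Rolls sparseness index over the activations of a network layer. The synthetic activations and the choice of this particular index are assumptions for illustration, not the exact measure reported in the paper.

      # Treves-Rolls (lifetime) sparseness: values near 1 mean dense responses,
      # values near 0 mean sparse responses. Inputs are non-negative activations.
      import numpy as np

      def treves_rolls_sparseness(responses):
          """responses: array of shape (n_images, n_units)."""
          r = np.maximum(responses, 0.0)
          return (r.mean(axis=0) ** 2) / ((r ** 2).mean(axis=0) + 1e-12)

      rng = np.random.default_rng(1)
      lower_layer = rng.exponential(1.0, (1000, 256))               # dense-ish activations
      higher_layer = lower_layer * (rng.random((1000, 256)) > 0.8)  # mostly zeros: sparser
      print("lower layer :", treves_rolls_sparseness(lower_layer).mean())
      print("higher layer:", treves_rolls_sparseness(higher_layer).mean())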

  8. Pitting temporal against spatial integration in schizophrenic patients.

    Science.gov (United States)

    Herzog, Michael H; Brand, Andreas

    2009-06-30

    Schizophrenic patients show strong impairments in visual backward masking, possibly caused by deficits at the early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Spatial as well as temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we provide further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (a temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.

  9. The neuropsychological and neuroradiological correlates of slowly progressive visual agnosia.

    Science.gov (United States)

    Giovagnoli, Anna Rita; Aresi, Anna; Reati, Fabiola; Riva, Alice; Gobbo, Clara; Bizzi, Alberto

    2009-04-01

    The case of a 64-year-old woman affected by slowly progressive visual agnosia is reported, aiming to describe specific cognitive-brain relationships. Longitudinal clinical and neuropsychological assessment, combined with magnetic resonance imaging (MRI), spectroscopy, and positron emission tomography (PET), was used. Sequential neuropsychological evaluations performed during a period of 9 years since disease onset showed the appearance of apperceptive and associative visual agnosia, alexia without agraphia, agraphia, finger agnosia, and prosopagnosia, but excluded dementia. MRI showed moderate diffuse cortical atrophy, with predominant atrophy in the left posterior cortical areas (temporal, parietal, and lateral occipital cortical gyri). 18FDG-PET showed marked bilateral posterior cortical hypometabolism; proton magnetic resonance spectroscopic imaging disclosed severe focal N-acetyl-aspartate depletion in the left temporoparietal and lateral occipital cortical areas. In conclusion, selective metabolic alterations and neuronal loss in the left temporoparietooccipital cortex may determine progressive visual agnosia in the absence of dementia.

  10. Encoding in the visual word form area: an fMRI adaptation study of words versus handwriting.

    Science.gov (United States)

    Barton, Jason J S; Fox, Christopher J; Sekunova, Alla; Iaria, Giuseppe

    2010-08-01

    Written texts are not just words but complex multidimensional stimuli, including aspects such as case, font, and handwriting style, for example. Neuropsychological reports suggest that left fusiform lesions can impair the reading of text for word (lexical) content, being associated with alexia, whereas right-sided lesions may impair handwriting recognition. We used fMRI adaptation in 13 healthy participants to determine if repetition-suppression occurred for words but not handwriting in the left visual word form area (VWFA) and the reverse in the right fusiform gyrus. Contrary to these expectations, we found adaptation for handwriting but not for words in both the left VWFA and the right VWFA homologue. A trend to adaptation for words but not handwriting was seen only in the left middle temporal gyrus. An analysis of anterior and posterior subdivisions of the left VWFA also failed to show any adaptation for words. We conclude that the right and the left fusiform gyri show similar patterns of adaptation for handwriting, consistent with a predominantly perceptual contribution to text processing.

  11. Presentation of spatio-temporal data in the context of information capacity and visual suggestiveness

    Science.gov (United States)

    Cybulski, Paweł

    2014-12-01

    The aim of this article is to present the concepts of information capacity and visual suggestiveness as map characteristics, using two maps of human migration as examples. A literature study was performed from this viewpoint. The features of cartographic visualization proposed by the author are an attempt to establish a cartographic pragmatics and to find ways of increasing the effectiveness of dynamic maps with large information capacity. Among works on cartographic pragmatics and the multi-aspect nature of spatio-temporal data, such a solution has not been proposed so far; it relates to the problem of map design. The aim of the discussion was to summarize knowledge concerning the design of dynamic spatial presentations and to classify them according to the number of graphic and dynamic variables that can be used in the geovisualization process. The author proposes calling the variation in the number of graphic and dynamic variables in spatial visualizations the visual capacity of a presentation. The author also hypothesizes that the greater the visual capacity employed, the more suggestive the presentation must be for the effectiveness of information transfer to be preserved.

  12. High-fidelity haptic and visual rendering for patient-specific simulation of temporal bone surgery.

    Science.gov (United States)

    Chan, Sonny; Li, Peter; Locketz, Garrett; Salisbury, Kenneth; Blevins, Nikolas H

    2016-12-01

    Medical imaging techniques provide a wealth of information for surgical preparation, but it is still often the case that surgeons are examining three-dimensional pre-operative image data as a series of two-dimensional images. With recent advances in visual computing and interactive technologies, there is much opportunity to provide surgeons an ability to actively manipulate and interpret digital image data in a surgically meaningful way. This article describes the design and initial evaluation of a virtual surgical environment that supports patient-specific simulation of temporal bone surgery using pre-operative medical image data. Computational methods are presented that enable six degree-of-freedom haptic feedback during manipulation, and that simulate virtual dissection according to the mechanical principles of orthogonal cutting and abrasive wear. A highly efficient direct volume renderer simultaneously provides high-fidelity visual feedback during surgical manipulation of the virtual anatomy. The resulting virtual surgical environment was assessed by evaluating its ability to replicate findings in the operating room, using pre-operative imaging of the same patient. Correspondences between surgical exposure, anatomical features, and the locations of pathology were readily observed when comparing intra-operative video with the simulation, indicating the predictive ability of the virtual surgical environment.

  13. Topographic organization of areas V3 and V4 and its relation to supra-areal organization of the primate visual system.

    Science.gov (United States)

    Arcaro, M J; Kastner, S

    2015-01-01

    Areas V3 and V4 are commonly thought of as individual entities in the primate visual system, based on definition criteria such as their representation of visual space, connectivity, functional response properties, and relative anatomical location in cortex. Yet, large-scale functional and anatomical organization patterns not only emphasize distinctions within each area, but also links across visual cortex. Specifically, the visuotopic organization of V3 and V4 appears to be part of a larger, supra-areal organization, clustering these areas with early visual areas V1 and V2. In addition, connectivity patterns across visual cortex appear to vary within these areas as a function of their supra-areal eccentricity organization. This complicates the traditional view of these regions as individual functional "areas." Here, we will review the criteria for defining areas V3 and V4 and will discuss functional and anatomical studies in humans and monkeys that emphasize the integration of individual visual areas into broad, supra-areal clusters that work in concert for a common computational goal. Specifically, we propose that the visuotopic organization of V3 and V4, which provides the criteria for differentiating these areas, also unifies these areas into the supra-areal organization of early visual cortex. We propose that V3 and V4 play a critical role in this supra-areal organization by filtering information about the visual environment along parallel pathways across higher-order cortex.

  14. Temporal Evolution and Dose-Volume Histogram Predictors of Visual Acuity After Proton Beam Radiation Therapy of Uveal Melanoma

    Energy Technology Data Exchange (ETDEWEB)

    Polishchuk, Alexei L. [Department of Radiation Oncology, University of California, San Francisco, San Francisco, California (United States); Mishra, Kavita K., E-mail: Kavita.Mishra@ucsf.edu [Department of Radiation Oncology, University of California, San Francisco, San Francisco, California (United States); Weinberg, Vivian; Daftari, Inder K. [Department of Radiation Oncology, University of California, San Francisco, San Francisco, California (United States); Nguyen, Jacqueline M.; Cole, Tia B. [Tumori Foundation, San Francisco, California (United States); Quivey, Jeanne M.; Phillips, Theodore L. [Department of Radiation Oncology, University of California, San Francisco, San Francisco, California (United States); Char, Devron H. [Tumori Foundation, San Francisco, California (United States)

    2017-01-01

    Purpose: To perform an in-depth temporal analysis of visual acuity (VA) outcomes after proton beam radiation therapy (PBRT) in a large, uniformly treated cohort of uveal melanoma (UM) patients; to determine trends in VA evolution depending on pretreatment and temporally defined posttreatment VA measurements; and to investigate the relevance of specific patient, tumor, and dose-volume parameters to posttreatment vision loss. Methods and Materials: Uveal melanoma patients receiving PBRT were identified from a prospectively maintained database. Included patients (n=645) received 56 GyE in 4 fractions and had pretreatment best corrected VA (BCVA) in the affected eye of count fingers (CF) or better, with posttreatment VA assessment at specified post-PBRT time point(s). Patients were grouped according to pretreatment BCVA into favorable (≥20/40), unfavorable (20/50-20/400), and poor (CF) strata. The temporal evolution of BCVA changes was described, and univariate and forward stepwise multivariate logistic regression analyses were performed to identify predictors of VA loss. Results: Median VA follow-up was 53 months (range, 3-213 months). At 60-month follow-up, among evaluable treated eyes with favorable pretreatment BCVA, 45% retained BCVA ≥20/40, whereas among evaluable treated eyes with initially unfavorable/poor BCVA, 21% had vision ≥20/100. Among those with a favorable initial BCVA, attaining BCVA of ≥20/40 at any posttreatment time point was associated with subsequent maintenance of excellent BCVA. Multivariate analysis identified the volumes of the macula (P<.0001) and optic nerve (P=.0004) receiving 28 GyE as independent dose-volume histogram predictors of 48-month post-PBRT vision loss among initially favorable treated eyes. Conclusions: Approximately half of PBRT-treated UM eyes with excellent pretreatment BCVA assessed at 5 years after treatment will retain excellent long-term vision. 28-GyE macula and optic nerve dose-volume histogram parameters allow for
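
    The dose-volume logistic modelling described above can be sketched as follows. The synthetic data, predictor names, and the statsmodels fit are illustrative assumptions only; in the study, univariate screening and forward stepwise selection were used to arrive at the final multivariate model.

      # Sketch: multivariate logistic regression of vision loss on DVH predictors.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 400
      macula_v28 = rng.uniform(0, 100, n)     # % macula volume receiving 28 GyE (hypothetical)
      nerve_v28 = rng.uniform(0, 100, n)      # % optic nerve volume receiving 28 GyE (hypothetical)
      p = 1 / (1 + np.exp(-(-2.5 + 0.03 * macula_v28 + 0.02 * nerve_v28)))
      vision_loss = (rng.random(n) < p).astype(float)

      X = sm.add_constant(np.column_stack([macula_v28, nerve_v28]))
      fit = sm.Logit(vision_loss, X).fit(disp=False)
      print(fit.params)   # intercept and coefficients on the log-odds scale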

  15. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    Science.gov (United States)

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  16. Video game players show more precise multisensory temporal processing abilities.

    Science.gov (United States)

    Donohue, Sarah E; Woldorff, Marty G; Mitroff, Stephen R

    2010-05-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. In the present study, we examined whether video game players' benefits generalize beyond vision to multisensory processing by presenting auditory and visual stimuli within a short temporal window to video game players and non-video game players. Participants performed two discrimination tasks, both of which revealed benefits for video game players: In a simultaneity judgment task, video game players were better able to distinguish whether simple visual and auditory stimuli occurred at the same moment or slightly offset in time, and in a temporal-order judgment task, they revealed an enhanced ability to determine the temporal sequence of multisensory stimuli. These results suggest that people with extensive experience playing video games display benefits that extend beyond the visual modality to also impact multisensory processing.

  17. Brain activity related to integrative processes in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Aaside, C T; Humphreys, G W

    2002-01-01

    We report evidence from a PET activation study that the inferior occipital gyri (likely to include area V2) and the posterior parts of the fusiform and inferior temporal gyri are involved in the integration of visual elements into perceptual wholes (single objects). Of these areas, the fusiform … that perceptual and memorial processes can be dissociated on both functional and anatomical grounds. No evidence was obtained for the involvement of the parietal lobes in the integration of single objects.

  18. Robust selectivity to two-object images in human visual cortex

    Science.gov (United States)

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105

  19. The integration of temporally shifted visual feedback in a synchronization task: The role of perceptual stability in a visuo-proprioceptive conflict situation.

    Science.gov (United States)

    Ceux, Tanja; Montagne, Gilles; Buekers, Martinus J

    2010-12-01

    The present study examined whether the beneficial role of coherently grouped visual motion structures for performing complex (interlimb) coordination patterns can be generalized to synchronization behavior in a visuo-proprioceptive conflict situation. To achieve this goal, 17 participants had to synchronize a self-moved circle, representing the arm movement, with a visual target signal corresponding to five temporally shifted visual feedback conditions (0%, 25%, 50%, 75%, and 100% of the target cycle duration) in three synchronization modes (in-phase, anti-phase, and intermediate). The results showed that the perception of a newly generated perceptual Gestalt between the visual feedback of the arm and the target signal facilitated the synchronization performance in the preferred in-phase synchronization mode in contrast to the less stable anti-phase and intermediate mode. Our findings suggest that the complexity of the synchronization mode defines to what extent the visual and/or proprioceptive information source affects the synchronization performance in the present unimanual synchronization task. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Representations of temporal information in short-term memory: Are they modality-specific?

    Science.gov (United States)

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

    Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Assessing Temporal Stability for Coarse Scale Satellite Moisture Validation in the Maqu Area, Tibet

    Science.gov (United States)

    Bhatti, Haris Akram; Rientjes, Tom; Verhoef, Wouter; Yaseen, Muhammad

    2013-01-01

    This study evaluates whether the temporal stability concept is applicable to a time series of satellite soil moisture images, so as to extend the common procedure of satellite image validation. The area of study is the Maqu area, which is located in the northeastern part of the Tibetan plateau. The network serves validation purposes of coarse scale (25-50 km) satellite soil moisture products and comprises 20 stations with probes installed at depths of 5, 10, 20, 40, and 80 cm. The study period is 2009. The temporal stability concept is applied to all five depths of the soil moisture measuring network and to a time series of satellite-based moisture products from the Advanced Microwave Scanning Radiometer (AMSR-E). The in-situ network is also assessed by Pearson's correlation analysis. Assessments by the temporal stability concept proved to be useful, and results suggest that probe measurements at 10 cm depth best match the satellite observations. The Mean Relative Difference plot for satellite pixels shows that a Representative Mean Soil Moisture (RMSM) pixel can be identified, but in our case this pixel does not overlay any in-situ station. Also, the RMSM pixel does not overlay any of the RMSM stations of the five probe depths. Pearson's correlation analysis on in-situ measurements suggests that moisture patterns over time are more persistent than over space. Since this study presents first results on the application of the temporal stability concept to a series of satellite images, we recommend further tests to become more conclusive on its effectiveness to broaden the procedure of satellite validation. PMID:23959237
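
    A minimal sketch of the temporal stability computation referred to above is given below: the mean relative difference (MRD) of each station or pixel from the spatial mean at each time step, together with its temporal standard deviation, is used to rank sites by how well they represent the area mean. Array shapes and the selection rule are assumptions.

      # Mean Relative Difference (MRD) analysis for temporal stability.
      import numpy as np

      def mean_relative_difference(sm):
          """sm: soil moisture array of shape (n_times, n_sites)."""
          spatial_mean = sm.mean(axis=1, keepdims=True)
          rel_diff = (sm - spatial_mean) / spatial_mean
          return rel_diff.mean(axis=0), rel_diff.std(axis=0, ddof=1)

      rng = np.random.default_rng(3)
      series = 0.25 + 0.05 * rng.random((365, 20))    # one year, 20 stations (synthetic)
      mrd, sdrd = mean_relative_difference(series)
      rmsm_site = int(np.argmin(np.abs(mrd) + sdrd))  # low bias and stable over time
      print("candidate representative (RMSM) site:", rmsm_site)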

  2. Rubber hand illusion under delayed visual feedback.

    Directory of Open Access Journals (Sweden)

    Sotaro Shimada

    BACKGROUND: The rubber hand illusion (RHI) is a subject's illusion of self-ownership of a rubber hand that is touched synchronously with their own hand. Although previous studies have confirmed that this illusion disappears when the rubber hand is touched asynchronously with the subject's hand, the minimum temporal discrepancy between these two events needed to attenuate the RHI has not been examined. METHODOLOGY/PRINCIPAL FINDINGS: In this study, various temporal discrepancies between visual and tactile stimulations were introduced by using a visual feedback delay experimental setup, and RHI effects in each temporal discrepancy condition were systematically tested. The results showed that subjects felt significantly greater RHI effects with temporal discrepancies of less than 300 ms compared with longer temporal discrepancies. The RHI effects on reaching performance (proprioceptive drift) showed similar conditional differences. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate for the first time that a temporal discrepancy of less than 300 ms between visual stimulation of the rubber hand and tactile stimulation of the subject's own hand is needed to induce a strong sensation of the RHI. We suggest that this time window of less than 300 ms is critical for the multi-sensory integration processes constituting the self-body image.

  3. Comparison of visual receptive fields in the dorsolateral prefrontal cortex and ventral intraparietal area in macaques.

    Science.gov (United States)

    Viswanathan, Pooja; Nieder, Andreas

    2017-12-01

    The concept of receptive field (RF) describes the responsiveness of neurons to sensory space. Neurons in the primate association cortices have long been known to be spatially selective but a detailed characterisation and direct comparison of RFs between frontal and parietal association cortices are missing. We sampled the RFs of a large number of neurons from two interconnected areas of the frontal and parietal lobes, the dorsolateral prefrontal cortex (dlPFC) and ventral intraparietal area (VIP), of rhesus monkeys by systematically presenting a moving bar during passive fixation. We found that more than half of neurons in both areas showed spatial selectivity. Single neurons in both areas could be assigned to five classes according to the spatial response patterns: few non-uniform RFs with multiple discrete response maxima could be dissociated from the vast majority of uniform RFs showing a single maximum; the latter were further classified into full-field and confined foveal, contralateral and ipsilateral RFs. Neurons in dlPFC showed a preference for the contralateral visual space and collectively encoded the contralateral visual hemi-field. In contrast, VIP neurons preferred central locations, predominantly covering the foveal visual space. Putative pyramidal cells with broad-spiking waveforms in PFC had smaller RFs than putative interneurons showing narrow-spiking waveforms, but distributed similarly across the visual field. In VIP, however, both putative pyramidal cells and interneurons had similar RFs at similar eccentricities. We provide a first, thorough characterisation of visual RFs in two reciprocally connected areas of a fronto-parietal cortical network. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Neuronal correlate of pictorial short-term memory in the primate temporal cortex

    Science.gov (United States)

    Miyashita, Yasushi; Chang, Han Soo

    1988-01-01

    It has been proposed that visual-memory traces are located in the temporal lobes of the cerebral cortex, as electrical stimulation of this area in humans results in recall of imagery1. Lesions in this area also affect recognition of an object after a delay in both humans2,3 and monkeys4-7, indicating a role in short-term memory of images8. Single-unit recordings from the temporal cortex have shown that some neurons continue to fire when one of two or four colours is to be remembered temporarily9. But neuronal responses selective to specific complex objects10-18, including hands10,13 and faces13,16,17, cease soon after the offset of stimulus presentation10-18. These results led to the question of whether any of these neurons could serve the memory of complex objects. We report here a group of shape-selective neurons in an anterior ventral part of the temporal cortex of monkeys that exhibited sustained activity during the delay period of a visual short-term memory task. The activity was highly selective for the pictorial information to be memorized and was independent of physical attributes such as the size, orientation, colour or position of the object. These observations show that the delay activity represents the short-term memory of the categorized percept of a picture.

  5. Visual rating of medial temporal lobe metabolism in mild cognitive impairment and Alzheimer's disease using FDG-PET

    International Nuclear Information System (INIS)

    Mosconi, Lisa; Santi, Susan De; Li, Yi; Li, Juan; Zhan, Jiong; Boppana, Madhu; Tsui, Wai Hon; Leon, Mony J. de; Pupi, Alberto

    2006-01-01

    This study was designed to examine the utility of visual inspection of medial temporal lobe (MTL) metabolism in the diagnosis of mild cognitive impairment (MCI) and Alzheimer's disease (AD) using FDG-PET scans. Seventy-five subjects [27 normal controls (NL), 26 MCI, and 22 AD] with FDG-PET and MRI scans were included in this study. We developed a four-point visual rating scale to evaluate the presence and severity of MTL hypometabolism on FDG-PET scans. The visual MTL ratings were compared with quantitative glucose metabolic rate (MRglc) data extracted using regions of interest (ROIs) from the MRI-coregistered PET scans of all subjects. A standard rating evaluation of neocortical hypometabolism was also completed. Logistic regressions were used to determine and compare the diagnostic accuracy of the MTL and cortical ratings. For both MTL and cortical ratings, high intra- and inter-rater reliabilities were found, and the visual ratings agreed with the quantitative ROI MRglc measures (p values <0.001). The combination of MTL and cortical ratings significantly improved the diagnostic accuracy over the cortical rating alone, with 100% of AD, 77% of MCI, and 85% of NL cases being correctly identified. This study shows that the visual rating of MTL hypometabolism on PET is reliable, yields a diagnostic accuracy equal to that of the quantitative ROI measures, and is clinically useful and more sensitive than cortical ratings for patients with MCI. We suggest this method be further evaluated for its potential in the early diagnosis of AD. (orig.)

  6. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    Science.gov (United States)

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to the behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two perceptual learning tasks. We develop a model of V1 that receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which the synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling an early visual area such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. The second visual area in the marmoset monkey: visuotopic organisation, magnification factors, architectonical boundaries, and modularity.

    Science.gov (United States)

    Rosa, M G; Fritsches, K A; Elston, G N

    1997-11-03

    The organisation of the second visual area (V2) in marmoset monkeys was studied by means of extracellular recordings of responses to visual stimulation and examination of myelin- and cytochrome oxidase-stained sections. Area V2 forms a continuous cortical belt of variable width (1-2 mm adjacent to the foveal representation of V1, and 3-3.5 mm near the midline and on the tentorial surface) bordering V1 on the lateral, dorsal, medial, and tentorial surfaces of the occipital lobe. The total surface area of V2 is approximately 100 mm2, or about 50% of the surface area of V1 in the same individuals. In each hemisphere, the receptive fields of V2 neurones cover the entire contralateral visual hemifield, forming an ordered visuotopic representation. As in other simians, the dorsal and ventral halves of V2 represent the lower and upper contralateral quadrants, respectively, with little invasion of the ipsilateral hemifield. The representation of the vertical meridian forms the caudal border of V2, with V1, whereas a field discontinuity approximately coincident with the horizontal meridian forms the rostral border of V2, with other visually responsive areas. The bridge of cortex connecting dorsal and ventral V2 contains neurones with receptive fields centred within 1 degree of the centre of the fovea. The visuotopy, size, shape and location of V2 show little variation among individuals. Analysis of cortical magnification factor (CMF) revealed that the V2 map of the visual field is highly anisotropic: for any given eccentricity, the CMF is approximately twice as large in the dimension parallel to the V1/V2 border as it is perpendicular to this border. Moreover, comparison of V2 and V1 in the same individuals demonstrated that the representation of the central visual field is emphasised in V2, relative to V1. Approximately half of the surface area of V2 is dedicated to the representation of the central 5 degrees of the visual field. Calculations based on the CMF, receptive

  8. It's about time: revisiting temporal processing deficits in dyslexia.

    Science.gov (United States)

    Casini, Laurence; Pech-Georgel, Catherine; Ziegler, Johannes C

    2018-03-01

    Temporal processing in French children with dyslexia was evaluated in three tasks: a word identification task requiring implicit temporal processing, and two explicit temporal bisection tasks, one in the auditory and one in the visual modality. Normally developing children matched on chronological age and reading level served as a control group. Children with dyslexia exhibited robust deficits in temporal tasks whether they were explicit or implicit and whether they involved the auditory or the visual modality. First, they presented larger perceptual variability when performing temporal tasks, whereas they showed no such difficulties when performing the same task on a non-temporal dimension (intensity). This dissociation suggests that their difficulties were specific to temporal processing and could not be attributed to lapses of attention, reduced alertness, faulty anchoring, or overall noisy processing. In the framework of cognitive models of time perception, these data point to a dysfunction of the 'internal clock' of dyslexic children. These results are broadly compatible with the recent temporal sampling theory of dyslexia. © 2017 John Wiley & Sons Ltd.

  9. Visual processing of words in a patient with visual form agnosia: a behavioural and fMRI study.

    Science.gov (United States)

    Cavina-Pratesi, Cristiana; Large, Mary-Ellen; Milner, A David

    2015-03-01

    Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to "guess" remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area (FFA). We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Malaria infection has spatial, temporal, and spatiotemporal heterogeneity in unstable malaria transmission areas in northwest Ethiopia.

    Directory of Open Access Journals (Sweden)

    Kassahun Alemu

    BACKGROUND: Malaria elimination requires successful nationwide control efforts. Detecting the spatiotemporal distribution and mapping high-risk areas are useful for effectively targeting pockets of malaria endemic regions for interventions. OBJECTIVE: The aim of the study was to identify patterns of malaria distribution by space and time in unstable malaria transmission areas in northwest Ethiopia. METHODS: Data were retrieved from the monthly reports stored in the district malaria offices for the period between 2003 and 2012. Eighteen districts in the highland and fringe malaria areas were included and geo-coded for the purpose of this study. The spatial data were created in ArcGIS10 for each district. The Poisson model was used by applying Kulldorff methods in the SaTScan™ software to analyze the purely temporal, spatial, and space-time clusters of malaria at the district level. RESULTS: The study revealed that malaria case distribution has spatial, temporal, and spatiotemporal heterogeneity in unstable transmission areas. Most likely spatial malaria clusters were detected in the Dera, Fogera, Farta, Libokemkem and Misrak Este districts (LLR=197764.1, p<0.001). Significant spatiotemporal malaria clusters were detected in the Dera, Fogera, Farta, Libokemkem and Misrak Este districts (LLR=197764.1, p<0.001) between 2003/1/1 and 2012/12/31. A temporal scan statistic identified two high-risk periods, from 2009/1/1 to 2010/12/31 (LLR=72490.5, p<0.001) and from 2003/1/1 to 2005/12/31 (LLR=26988.7, p<0.001). CONCLUSION: In unstable malaria transmission areas, detecting and considering the spatiotemporal heterogeneity would be useful to strengthen malaria control efforts and ultimately achieve elimination.
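
    SaTScan scores each candidate cluster with a Poisson log-likelihood ratio of the kind sketched below; this is the textbook form of the statistic, not the SaTScan implementation, and the example counts are made up.

      # Kulldorff Poisson log-likelihood ratio for one candidate cluster.
      import numpy as np

      def poisson_llr(c_in, e_in, c_total):
          """c_in: observed cases inside the cluster; e_in: cases expected inside
          the cluster under the null; c_total: total observed cases."""
          if c_in <= e_in:
              return 0.0                        # only excess risk counts as a cluster
          c_out, e_out = c_total - c_in, c_total - e_in
          return c_in * np.log(c_in / e_in) + c_out * np.log(c_out / e_out)

      print(poisson_llr(c_in=520, e_in=300, c_total=2000))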

  11. Cue competition affects temporal dynamics of edge-assignment in human visual cortex.

    Science.gov (United States)

    Brooks, Joseph L; Palmer, Stephen E

    2011-03-01

    Edge-assignment determines the perception of relative depth across an edge and the shape of the closer side. Many cues determine edge-assignment, but relatively little is known about the neural mechanisms involved in combining these cues. Here, we manipulated extremal edge and attention cues to bias edge-assignment such that these two cues either cooperated or competed. To index their neural representations, we flickered figure and ground regions at different frequencies and measured the corresponding steady-state visual-evoked potentials (SSVEPs). Figural regions had stronger SSVEP responses than ground regions, independent of whether they were attended or unattended. In addition, competition and cooperation between the two edge-assignment cues significantly affected the temporal dynamics of edge-assignment processes. The figural SSVEP response peaked earlier when the cues causing it cooperated than when they competed, but sustained edge-assignment effects were equivalent for cooperating and competing cues, consistent with a winner-take-all outcome. These results provide physiological evidence that figure-ground organization involves competitive processes that can affect the latency of figural assignment.
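
    Frequency tagging of the kind used above can be illustrated with a short sketch that recovers the SSVEP amplitude at each tagged frequency from the spectrum of a simulated EEG epoch. The sampling rate, tagging frequencies, and synthetic signal are assumptions, not the study's parameters.

      # Recover SSVEP amplitudes at the figure and ground flicker frequencies.
      import numpy as np

      fs = 500.0                                  # sampling rate in Hz (assumed)
      t = np.arange(0, 10, 1 / fs)                # 10 s epoch
      f_figure, f_ground = 7.5, 12.0              # tagging frequencies in Hz (assumed)
      rng = np.random.default_rng(4)
      eeg = (2.0 * np.sin(2 * np.pi * f_figure * t)
             + 0.8 * np.sin(2 * np.pi * f_ground * t)
             + rng.normal(0, 1.0, t.size))

      amp = 2 * np.abs(np.fft.rfft(eeg)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)
      for name, f in [("figure", f_figure), ("ground", f_ground)]:
          print(name, "SSVEP amplitude:", amp[np.argmin(np.abs(freqs - f))])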

  12. Task-specific impairments and enhancements induced by magnetic stimulation of human visual area V5.

    Science.gov (United States)

    Walsh, V; Ellison, A; Battelli, L; Cowey, A

    1998-03-22

    Transcranial magnetic stimulation (TMS) can be used to simulate the effects of highly circumscribed brain damage permanently present in some neuropsychological patients, by reversibly disrupting the normal functioning of the cortical area to which it is applied. By using TMS we attempted to recreate deficits similar to those reported in a motion-blind patient and to assess the specificity of deficits when TMS is applied over human area V5. We used six visual search tasks and showed that subjects were impaired in a motion but not a form 'pop-out' task when TMS was applied over V5. When motion was present, but irrelevant, or when attention to colour and form were required, TMS applied to V5 enhanced performance. When attention to motion was required in a motion-form conjunction search task, irrespective of whether the target was moving or stationary, TMS disrupted performance. These data suggest that attention to different visual attributes involves mutual inhibition between different extrastriate visual areas.

  13. Sequence Synopsis: Optimize Visual Summary of Temporal Event Data.

    Science.gov (United States)

    Chen, Yuanzhe; Xu, Panpan; Ren, Liu

    2018-01-01

    Event sequence analysis plays an important role in many application domains, such as customer behavior analysis, electronic health record analysis, and vehicle fault diagnosis. Real-world event sequence data is often noisy and complex, with high event cardinality, making it a challenging task to construct concise yet comprehensive overviews of such data. In this paper, we propose a novel visualization technique based on the minimum description length (MDL) principle to construct a coarse-level overview of event sequence data while balancing the information loss in it. The method addresses a fundamental trade-off in visualization design: reducing visual clutter vs. increasing the information content in a visualization. The method enables simultaneous sequence clustering and pattern extraction and is highly tolerant to noise such as missing or additional events in the data. Based on this approach we propose a visual analytics framework with multiple levels-of-detail to facilitate interactive data exploration. We demonstrate the usability and effectiveness of our approach through case studies with two real-world datasets. One dataset showcases a new application domain for event sequence visualization, i.e., fault development path analysis in vehicles for predictive maintenance. We also discuss the strengths and limitations of the proposed method based on user feedback.

  14. A neural model of motion processing and visual navigation by cortical area MST.

    Science.gov (United States)

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  15. A new method for measuring temporal resolution in electrocardiogram-gated reconstruction image with area-detector computed tomography

    International Nuclear Information System (INIS)

    Kaneko, Takeshi; Takagi, Masachika; Kato, Ryohei; Anno, Hirofumi; Kobayashi, Masanao; Yoshimi, Satoshi; Sanda, Yoshihiro; Katada, Kazuhiro

    2012-01-01

    The purpose of this study was to design and construct a phantom for assessing motion artifacts in electrocardiogram (ECG)-gated reconstruction images, and to estimate the temporal resolution under various conditions. A stepping motor was used to move the phantom over an arc in a reciprocating manner. The program controlling the stepping motor permitted the stationary period and the heart rate to be adjusted as desired. Images of the phantom were obtained using a 320-row area-detector computed tomography (ADCT) system under various conditions using the ECG-gated reconstruction method. For the estimation, the reconstruction phase was changed continuously and the motion artifacts were assessed quantitatively. The temporal resolution was calculated from the number of motion-free images. Changes in the temporal resolution with heart rate, rotation time, the number of reconstruction segments, and the acquisition position along the z-axis were also investigated. The measured temporal resolution of ECG-gated half reconstruction is 180 ms, which is in good agreement with the nominal temporal resolution of 175 ms. The measured temporal resolution of ECG-gated segmental reconstruction is in good agreement with the nominal temporal resolution in most cases, and the estimated temporal resolution approached the nominal value as the number of reconstruction segments was increased. The temporal resolution did not change with acquisition position. This study shows that the proposed phantom can be used to estimate temporal resolution. (author)
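
    The nominal temporal resolution values quoted above follow from the usual idealized relation for ECG-gated CT reconstruction, sketched below: half-scan reconstruction uses roughly half a rotation of data, and N-segment reconstruction divides that among N heartbeats. The formula is a standard approximation, not taken from the paper.

      # Idealized nominal temporal resolution of ECG-gated reconstruction.
      def nominal_temporal_resolution(rotation_time_ms, n_segments=1):
          return rotation_time_ms / (2 * n_segments)

      print(nominal_temporal_resolution(350))                # half reconstruction -> 175 ms
      print(nominal_temporal_resolution(350, n_segments=2))  # 2-segment reconstruction -> 87.5 ms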

  16. A Residential Area Extraction Method for High Resolution Remote Sensing Imagery by Using Visual Saliency and Perceptual Organization

    Directory of Open Access Journals (Sweden)

    CHEN Yixiang

    2017-12-01

    Inspired by the human visual cognitive mechanism, a method for residential area extraction from high-resolution remote sensing images is proposed, based on visual saliency and perceptual organization. Firstly, the data field theory of cognitive physics is introduced to model visual saliency, and candidate residential areas are produced by adaptive thresholding. Then, the exact residential areas are obtained and refined by perceptual organization based on the high-frequency features of a multi-scale wavelet transform. Finally, the validity of the proposed method is verified by experiments conducted on ZY-3 and Quickbird image data sets.
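
    A much-simplified sketch of the first stage of such a pipeline is shown below: adaptive (Otsu) thresholding of a saliency map to obtain candidate residential regions, which the method then refines by perceptual organization on wavelet high-frequency features. The synthetic saliency map and the use of scikit-image are assumptions, not the paper's data-field saliency model.

      # Candidate residential regions by adaptive thresholding of a saliency map.
      import numpy as np
      from skimage import filters, measure

      rng = np.random.default_rng(6)
      saliency = rng.random((512, 512))
      saliency[100:220, 150:300] += 0.6              # a bright, "salient" built-up block
      saliency = saliency / saliency.max()

      thresh = filters.threshold_otsu(saliency)      # data-driven global threshold
      candidates = saliency > thresh
      labels = measure.label(candidates)             # connected candidate regions
      print("candidate regions:", labels.max(), "threshold:", round(float(thresh), 3))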

  17. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistic experiments: the processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence-processing network, in both the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment in which participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  19. Attentional episodes in visual perception

    NARCIS (Netherlands)

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks

  20. Utilizing Structure-from-Motion Photogrammetry with Airborne Visual and Thermal Images to Monitor Thermal Areas in Yellowstone National Park

    Science.gov (United States)

    Carr, B. B.; Vaughan, R. G.

    2017-12-01

    The thermal areas in Yellowstone National Park (Wyoming, USA) are constantly changing. Persistent monitoring of these areas is necessary to better understand the behavior and potential hazards of both the thermal features and the deeper hydrothermal system driving the observed surface activity. As part of the Park's monitoring program, thousands of visual and thermal infrared (TIR) images have been acquired from a variety of airborne platforms over the past decade. We have used structure-from-motion (SfM) photogrammetry techniques to generate a variety of data products from these images, including orthomosaics, temperature maps, and digital elevation models (DEMs). Temperature maps were generated for Upper Geyser Basin and Norris Geyser Basin for the years 2009-2015, by applying SfM to nighttime TIR images collected from an aircraft-mounted forward-looking infrared (FLIR) camera. Temperature data were preserved through the SfM processing by applying a uniform linear stretch over the entire image set to convert between temperature and a 16-bit digital number. Mosaicked temperature maps were compared to the original FLIR image frames and to ground-based temperature data to constrain the accuracy of the method. Due to pixel averaging and resampling, among other issues, the derived temperature values are typically within 5-10 ° of the values of the un-resampled image frame. We also created sub-meter resolution DEMs from airborne daytime visual images of individual thermal areas. These DEMs can be used for resource and hazard management, and in cases where multiple DEMs exist from different times, for measuring topographic change, including change due to thermal activity. For example, we examined the sensitivity of the DEMs to topographic change by comparing DEMs of the travertine terraces at Mammoth Hot Springs, which can grow at > 1 m per year. These methods are generally applicable to images from airborne platforms, including planes, helicopters, and unmanned aerial
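
    The uniform linear stretch mentioned above, which lets temperature survive 16-bit image processing and be recovered afterwards, can be sketched as follows; the temperature range limits are placeholder assumptions, not the values used for the Yellowstone surveys.

      # Map temperatures onto 16-bit digital numbers and back (uniform linear stretch).
      import numpy as np

      T_MIN, T_MAX = -20.0, 120.0                    # assumed scene temperature range (deg C)

      def temp_to_dn(temp_c):
          scaled = (np.asarray(temp_c, dtype=float) - T_MIN) / (T_MAX - T_MIN)
          return np.clip(np.round(scaled * 65535), 0, 65535).astype(np.uint16)

      def dn_to_temp(dn):
          return np.asarray(dn, dtype=float) / 65535 * (T_MAX - T_MIN) + T_MIN

      print(dn_to_temp(temp_to_dn([25.0, 92.3])))    # round-trips to ~[25.0, 92.3]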

  1. FACILITATING INTEGRATED SPATIO-TEMPORAL VISUALIZATION AND ANALYSIS OF HETEROGENEOUS ARCHAEOLOGICAL AND PALAEOENVIRONMENTAL RESEARCH DATA

    Directory of Open Access Journals (Sweden)

    C. Willmes

    2012-07-01

    Full Text Available In the context of the Collaborative Research Centre 806 "Our way to Europe" (CRC806), a research database is developed for integrating data from the disciplines of archaeology, the geosciences and the cultural sciences to facilitate integrated access to heterogeneous data sources. A practice-oriented data integration concept and its implementation are presented in this contribution. The data integration approach is based on the application of Semantic Web Technology and is applied to the domains of archaeological and palaeoenvironmental data. The aim is to provide integrated spatio-temporal access to an existing wealth of data to facilitate research on the integrated data basis. For the web portal of the CRC806 research database (CRC806-Database), a number of interfaces and applications have been evaluated, developed and implemented for exposing the data to interactive analysis and visualizations.

  2. Attentional Capture by Salient Distractors during Visual Search Is Determined by Temporal Task Demands

    DEFF Research Database (Denmark)

    Kiss, Monika; Grubert, Anna; Petersen, Anders

    2012-01-01

    The question of whether attentional capture by salient but task-irrelevant visual stimuli is triggered in a bottom–up fashion or depends on top–down task settings is still unresolved. Strong support for bottom–up capture was obtained in the additional singleton task, in which search arrays were visible...... until response onset. Equally strong evidence for top–down control of attentional capture was obtained in spatial cueing experiments in which display durations were very brief. To demonstrate the critical role of temporal task demands in salience-driven attentional capture, we measured ERP indicators...... component that was followed by a late Pd component, suggesting that they triggered attentional capture, which was later replaced by location-specific inhibition. When search arrays were visible for only 200 msec, the distractor-elicited N2pc was eliminated and was replaced by a Pd component in the same time...

  3. Eye position effects on the remapped memory trace of visual motion in cortical area MST.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2016-02-23

    After a saccade, most MST neurons respond to moving visual stimuli that had existed in their post-saccadic receptive fields and turned off before the saccade ("trans-saccadic memory remapping"). Neuronal responses in higher visual processing areas are known to be modulated in relation to gaze angle to represent image location in spatiotopic coordinates. In the present study, we investigated the eye position effects after saccades and found that the gaze angle modulated the visual sensitivity of MST neurons after saccades both to the actually existing visual stimuli and to the visual memory traces remapped by the saccades. We suggest that two mechanisms, trans-saccadic memory remapping and gaze modulation, work cooperatively in individual MST neurons to represent a continuous visual world.

  4. Spatio-temporal evolution of forest fires in Portugal

    Science.gov (United States)

    Tonini, Marj; Pereira, Mário G.; Parente, Joana

    2017-04-01

    A key issue in fire management is the ability to explore and try to predict where and when fires are more likely to occur. This information can be useful to understand the triggering factors of ignitions and for planning strategies to reduce forest fires, to manage the sources of ignition and to identify areas and time frames at risk. Therefore, producing maps displaying forest fire locations and their occurrence in time can be of great help for accurately forecasting these hazardous events. In a fire-prone country such as Portugal, where thousands of events occur each year, it is difficult to derive information about fire overdensities and recurrences just by looking at the original arrangement of the mapped ignition points or burnt areas. In this respect, statistical methods originally developed for spatio-temporal stochastic point processes can be employed to find a structure within these large datasets. In the present study, the authors propose an approach to analyze and visualize the evolution in space and in time of the forest fires that occurred in Portugal during a long time period (1990-2013). Data came from the Portuguese mapped burnt areas official geodatabase (by the Institute for the Conservation of Nature and Forests), which is the result of interpreted satellite measurements. The following statistical analyses were performed: geographically weighted summary statistics, to analyze the local variability of the average burned area; and the space-time kernel density, to elaborate smoothed density surfaces representing overdensities of fires classed by size and by North vs. South region. Finally, we employed the volume rendering technique to visualize the spatio-temporal evolution of these events in a single map: this representation allows visually inspecting the areas and time steps most affected by a high aggregation of forest fires. It results that during the whole investigated period overdensities are mainly located in the northern regions, while in the
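
    As a concrete illustration of the space-time kernel density mentioned above, the following is a minimal sketch with Gaussian kernels; the bandwidths hs (space) and ht (time) and the grids are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

# Space-time kernel density estimate over a spatial grid and a set of time
# steps. 'events' holds one (x, y, t) row per fire; bandwidths are assumed.
def st_kernel_density(events, grid_xy, grid_t, hs=10.0, ht=1.0):
    dens = np.zeros((len(grid_t), len(grid_xy)))
    for x, y, t in events:
        d2 = ((grid_xy[:, 0] - x) ** 2 + (grid_xy[:, 1] - y) ** 2) / hs ** 2
        k_space = np.exp(-0.5 * d2) / (2.0 * np.pi * hs ** 2)              # 2-D Gaussian
        k_time = np.exp(-0.5 * ((grid_t - t) / ht) ** 2) / (np.sqrt(2.0 * np.pi) * ht)
        dens += k_time[:, None] * k_space[None, :]                         # separable kernel
    return dens / len(events)
```

    Slices of the resulting (time x space) density cube are what a volume-rendering step can then display as the spatio-temporal evolution of fire overdensities.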

  5. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    Science.gov (United States)

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro

    2012-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547

  6. Selective visual attention to emotional words: Early parallel frontal and visual activations followed by interactive effects in visual cortex.

    Science.gov (United States)

    Schindler, Sebastian; Kissler, Johanna

    2016-10-01

    Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.

  7. Dynamic adjustments in prefrontal, hippocampal, and inferior temporal interactions with increasing visual working memory load.

    Science.gov (United States)

    Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark

    2008-07-01

    The maintenance of visual stimuli across a delay interval in working memory tasks is thought to involve reverberant neural communication between the prefrontal cortex and posterior visual association areas. Recent studies suggest that the hippocampus might also contribute to this retention process, presumably via reciprocal interactions with visual regions. To characterize the nature of these interactions, we performed functional connectivity analysis on an event-related functional magnetic resonance imaging data set in which participants performed a delayed face recognition task. As the number of faces that participants were required to remember was parametrically increased, the right inferior frontal gyrus (IFG) showed a linearly decreasing degree of functional connectivity with the fusiform face area (FFA) during the delay period. In contrast, the hippocampus linearly increased its delay period connectivity with both the FFA and the IFG as the mnemonic load increased. Moreover, the degree to which participants' FFA showed a load-dependent increase in its connectivity with the hippocampus predicted the degree to which its connectivity with the IFG decreased with load. Thus, these neural circuits may dynamically trade off to accommodate the particular mnemonic demands of the task, with IFG-FFA interactions mediating maintenance at lower loads and hippocampal interactions supporting retention at higher loads.

  8. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  9. Purely temporal figure-ground segregation.

    Science.gov (United States)

    Kandil, F I; Fahle, M

    2001-05-01

    Visual figure-ground segregation is achieved by exploiting differences in features such as luminance, colour, motion or presentation time between a figure and its surround. Here we determine the shortest delay times required for figure-ground segregation based on purely temporal features. Previous studies usually employed stimulus onset asynchronies between figure and ground, containing possible artefacts based on apparent motion cues or on luminance differences. Our stimuli systematically avoid these artefacts by constantly showing 20 x 20 'colons' that flip by 90 degrees around their midpoints at constant time intervals. Colons constituting the background flip in phase, whereas those constituting the target flip with a phase delay. We tested the impact of frequency modulation and phase reduction on target detection. Younger subjects performed well above chance even at temporal delays as short as 13 ms, whilst older subjects required up to three times longer delays in some conditions. Figure-ground segregation can rely on purely temporal delays down to around 10 ms even in the absence of luminance and motion artefacts, indicating a temporal precision of cortical information processing almost an order of magnitude lower than the one required for some models of feature binding in the visual cortex [e.g. Singer, W. (1999), Curr. Opin. Neurobiol., 9, 189-194]. Hence, in our experiment, observers are unable to use temporal stimulus features with the precision required for these models.

  10. Visual interhemispheric communication and callosal connections of the occipital lobes.

    Science.gov (United States)

    Berlucchi, Giovanni

    2014-07-01

    Callosal connections of the occipital lobes, coursing in the splenium of the corpus callosum, have long been thought to be crucial for interactions between the cerebral hemispheres in vision in both experimental animals and humans. Yet the callosal connections of the temporal and parietal lobes appear to have more important roles than those of the occipital callosal connections in at least some high-order interhemispheric visual functions. The partial intermixing and overlap of temporal, parietal and occipital callosal connections within the splenium has made it difficult to attribute the effects of splenial pathological lesions or experimental sections to splenial components specifically related to select cortical areas. The present review describes some current contributions from the modern techniques for the tracking of commissural fibers within the living human brain to the tentative assignation of specific visual functions to specific callosal tracts, either occipital or extraoccipital. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Endogenous sequential cortical activity evoked by visual stimuli.

    Science.gov (United States)

    Carrillo-Reid, Luis; Miller, Jae-Eun Kang; Hamm, Jordan P; Jackson, Jesse; Yuste, Rafael

    2015-06-10

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. Copyright © 2015 Carrillo-Reid et al.

  12. Noisy Spiking in Visual Area V2 of Amblyopic Monkeys.

    Science.gov (United States)

    Wang, Ye; Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Smith, Earl L; Chino, Yuzo M

    2017-01-25

    Interocular decorrelation of input signals in developing visual cortex can cause impaired binocular vision and amblyopia. Although increased intrinsic noise is thought to be responsible for a range of perceptual deficits in amblyopic humans, the neural basis for the elevated perceptual noise in amblyopic primates is not known. Here, we tested the idea that perceptual noise is linked to the neuronal spiking noise (variability) resulting from developmental alterations in cortical circuitry. To assess spiking noise, we analyzed the contrast-dependent dynamics of spike counts and spiking irregularity by calculating the square of the coefficient of variation in interspike intervals (CV2) and the trial-to-trial fluctuations in spiking, or mean-matched Fano factor (m-FF), in visual area V2 of monkeys reared with chronic monocular defocus. In amblyopic neurons, the contrast versus response functions and the spike count dynamics exhibited significant deviations from comparable data for normal monkeys. The CV2 was pronounced in amblyopic neurons for high-contrast stimuli and the m-FF was abnormally high in amblyopic neurons for low-contrast gratings. The spike count, CV2, and m-FF of spontaneous activity were also elevated in amblyopic neurons. These contrast-dependent spiking irregularities were correlated with the level of binocular suppression in these V2 neurons and with the severity of perceptual loss for individual monkeys. Our results suggest that the developmental alterations in normalization mechanisms resulting from early binocular suppression can explain much of these contrast-dependent spiking abnormalities in V2 neurons and the perceptual performance of our amblyopic monkeys. Amblyopia is a common developmental vision disorder in humans. Despite the extensive animal studies on how amblyopia emerges, we know surprisingly little about the neural basis of amblyopia in humans and nonhuman primates. Although the vision of amblyopic humans is often described as
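
    The two noise measures named above can be written down compactly. The sketch below computes the squared coefficient of variation of interspike intervals and a plain (not mean-matched) Fano factor over a fixed counting window; the mean-matching step used for the m-FF in the study is omitted here for brevity.

```python
import numpy as np

# trial_spike_times: list of 1-D arrays of spike times (s), one per trial.
def cv2_of_isis(trial_spike_times):
    isis = np.concatenate([np.diff(st) for st in trial_spike_times if len(st) > 1])
    return (np.std(isis) / np.mean(isis)) ** 2      # squared coefficient of variation

def fano_factor(trial_spike_times, t_start, t_stop):
    counts = np.array([np.sum((st >= t_start) & (st < t_stop))
                       for st in trial_spike_times])
    return np.var(counts) / np.mean(counts)         # trial-to-trial count variability
```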

  13. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine ePecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
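
    The cross-correlation index referred to above can be sketched as follows. Inter-tap intervals (ITIs) are correlated with the pacing sequence's inter-onset intervals (IOIs) at different lags; treating the ratio of the lag-0 to the lag-1 correlation as a prediction (rather than tracking) index is an assumed convention here, not necessarily the study's exact formula.

```python
import numpy as np

def lagged_corr(itis, iois, lag):
    # Correlate ITI_n with IOI_(n - lag); lag 0 ~ prediction, lag 1 ~ tracking.
    itis, iois = np.asarray(itis, float), np.asarray(iois, float)
    if lag > 0:
        return np.corrcoef(itis[lag:], iois[:-lag])[0, 1]
    return np.corrcoef(itis, iois)[0, 1]

def prediction_index(itis, iois):
    # Values > 1 suggest tappers anticipate tempo changes rather than follow them.
    return lagged_corr(itis, iois, 0) / lagged_corr(itis, iois, 1)
```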

  14. Region-specificity of GABAA receptor mediated effects on orientation and direction selectivity in cat visual cortical area 18.

    Science.gov (United States)

    Jirmann, Kay-Uwe; Pernberg, Joachim; Eysel, Ulf T

    2009-01-01

    The role of GABAergic inhibition in orientation and direction selectivity has been investigated with the GABA(A) blocker bicuculline in the cat visual cortex, and results indicated a region-specific difference in the functional contributions of GABAergic inhibition in areas 17 and 18. In area 17 inhibition appeared mainly involved in sculpturing orientation and direction tuning, while in area 18 inhibition seemed more closely associated with temporal receptive field properties. However, different types of stimuli were used to test areas 17 and 18, and further studies performed in area 17 suggested an important influence of the stimulus type (single light bars vs. moving gratings) on the evoked responses (transient vs. sustained) and inhibitory mechanisms (GABA(A) vs. GABA(B)), which in turn might be more decisive for the specific results than the cortical region. To insert the missing link in this chain of arguments it was necessary to study GABAergic inhibition in area 18 with moving light bars, which has not been done so far. Therefore, in the present study we investigated area 18 cells responding to oriented moving light bars with extracellular recordings and reversible microiontophoretic blockade of GABAergic inhibition with bicuculline methiodide. The majority of neurons were characterized by pronounced orientation specificity and variable degrees of direction selectivity. GABA(A)ergic inhibition significantly influenced preferred orientation and preferred direction in area 18. During the action of bicuculline, orientation tuning width increased and orientation and direction selectivity indices decreased. Our results obtained in area 18 with moving bar stimuli, although similar in the proportion of affected cells to those described in area 17, quantitatively matched the findings for direction and orientation specificity obtained with moving gratings in area 18. Accordingly, stimulus type is not decisive in area 18 and the GABA(A)-dependent, inhibitory intracortical

  15. On the visualization of water-related big data: extracting insights from drought proxies' datasets

    Science.gov (United States)

    Diaz, Vitali; Corzo, Gerald; van Lanen, Henny A. J.; Solomatine, Dimitri

    2017-04-01

    Big data is a growing area of science from which hydroinformatics can benefit greatly. There have been a number of important developments in the area of data science aimed at the analysis of large datasets. Such datasets related to water include measurements, simulations, reanalysis, scenario analyses and proxies. By convention, information contained in these databases is referred to a specific time and a space (i.e., longitude/latitude). This work is motivated by the need to extract insights from large water-related datasets, i.e., transforming large amounts of data into useful information that helps to better understand water-related phenomena, particularly drought. In this context, data visualization, part of data science, involves techniques to create and to communicate data by encoding it as visual graphical objects. They may help to better understand data and detect trends. Based on existing methods of data analysis and visualization, this work aims to develop tools for visualizing water-related large datasets. These tools were developed, taking advantage of existing libraries for data visualization, into a group of graphs which include both polar area diagrams (PADs) and radar charts (RDs). In both graphs, time steps are represented by the polar angles and the percentages of area in drought by the radii. For illustration, three large datasets of drought proxies are chosen to identify trends, prone areas and the spatio-temporal variability of drought in a set of case studies. The datasets are (1) SPI-TS2p1 (1901-2002, 11.7 GB), (2) SPI-PRECL0p5 (1948-2016, 7.91 GB) and (3) SPEI-baseV2.3 (1901-2013, 15.3 GB). All of them are on a monthly basis and with a spatial resolution of 0.5 degrees. The first two were retrieved from the repository of the International Research Institute for Climate and Society (IRI). They are included in the Analyses Standardized Precipitation Index (SPI) project (iridl.ldeo.columbia.edu/SOURCES/.IRI/.Analyses/.SPI/). The third dataset was
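
    For the polar area diagram described above (time steps as angular sectors, percentage of area in drought as the radius), a minimal matplotlib sketch might look as follows; the monthly labels and values are placeholders, not data from the SPI/SPEI sets.

```python
import numpy as np
import matplotlib.pyplot as plt

def polar_area_diagram(pct_in_drought, labels=None):
    # One angular sector per time step; sector radius = % of area in drought.
    n = len(pct_in_drought)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    width = 2.0 * np.pi / n
    ax = plt.subplot(projection="polar")
    ax.bar(angles, pct_in_drought, width=width, align="edge", alpha=0.6)
    if labels is not None:
        ax.set_xticks(angles + width / 2.0)
        ax.set_xticklabels(labels)
    ax.set_title("Percentage of area in drought per time step")
    return ax

# Example with dummy monthly values:
# polar_area_diagram(np.random.uniform(0, 40, 12), list("JFMAMJJASOND"))
# plt.show()
```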

  16. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    Science.gov (United States)

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184

  17. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Directory of Open Access Journals (Sweden)

    Nazli eEmadi

    2014-11-01

    Full Text Available Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically, we found that a low-frequency (< 8 Hz) oscillation in the spike train, prior to and phase-locked to stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in neural selectivity and response reliability and eventually a higher behavioral performance.

  18. Binding ‘when’ and ‘where’ impairs temporal, but not spatial recall in auditory and visual working memory

    Directory of Open Access Journals (Sweden)

    Franco eDelogu

    2012-03-01

    Full Text Available Information about where and when events happened seems naturally linked, but only a few studies have investigated whether and how these features are associated in working memory. We tested whether the location of items and their temporal order are jointly or independently encoded. We also verified whether spatio-temporal binding is influenced by the sensory modality of the items. Participants were requested to memorize the location and/or the serial order of five items (environmental sounds or pictures) sequentially presented from five different locations. Next, they were asked to recall either the item locations or their order of presentation within the sequence. Attention during encoding was manipulated by contrasting blocks of trials in which participants were requested to encode only one feature with blocks of trials where they had to encode both features. Results show an interesting interaction between task and attention. Accuracy in serial order recall was affected by the simultaneous encoding of item location, whereas the recall of item location was unaffected by the concurrent encoding of the serial order of items. This asymmetric influence of attention on the two tasks was similar for the auditory and visual modality. Together, these data indicate that item location is processed in a relatively automatic fashion, whereas maintaining serial order is more demanding in terms of attention. The remarkably analogous results for auditory and visual memory performance suggest that the binding of serial order and location in working memory is not modality-dependent, and may involve common intersensory mechanisms.

  19. A comparative perspective on the human temporal lobe

    NARCIS (Netherlands)

    Bryant, K.L.; Preuss, T.M.; Bruner, E.; Ogihara, N.; Tanabe, H.

    2018-01-01

    The temporal lobe is a morphological specialization of primates resulting from an expansion of higher-order visual cortex that is a hallmark of the primate brain. Among primates, humans possess a temporal lobe that has significantly expanded. Several uniquely human cognitive abilities, including

  20. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  1. Emotion processing in the visual brain: a MEG analysis.

    Science.gov (United States)

    Peyk, Peter; Schupp, Harald T; Elbert, Thomas; Junghöfer, Markus

    2008-06-01

    Recent functional magnetic resonance imaging (fMRI) and event-related brain potential (ERP) studies provide empirical support for the notion that emotional cues guide selective attention. Extending this line of research, whole head magneto-encephalogram (MEG) was measured while participants viewed in separate experimental blocks a continuous stream of either pleasant and neutral or unpleasant and neutral pictures, presented for 330 ms each. Event-related magnetic fields (ERF) were analyzed after intersubject sensor coregistration, complemented by minimum norm estimates (MNE) to explore neural generator sources. Both streams of analysis converge by demonstrating the selective emotion processing in an early (120-170 ms) and a late time interval (220-310 ms). ERF analysis revealed that the polarity of the emotion difference fields was reversed across early and late intervals suggesting distinct patterns of activation in the visual processing stream. Source analysis revealed the amplified processing of emotional pictures in visual processing areas with more pronounced occipito-parieto-temporal activation in the early time interval, and a stronger engagement of more anterior, temporal, regions in the later interval. Confirming previous ERP studies showing facilitated emotion processing, the present data suggest that MEG provides a complementary look at the spread of activation in the visual processing stream.

  2. The neural correlates of visual imagery: A co-ordinate-based meta-analysis.

    Science.gov (United States)

    Winlove, Crawford I P; Milton, Fraser; Ranson, Jake; Fulford, Jon; MacKisack, Matthew; Macpherson, Fiona; Zeman, Adam

    2018-01-02

    Visual imagery is a form of sensory imagination, involving subjective experiences typically described as similar to perception, but which occur in the absence of corresponding external stimuli. We used the Activation Likelihood Estimation algorithm (ALE) to identify regions consistently activated by visual imagery across 40 neuroimaging studies, the first such meta-analysis. We also employed a recently developed multi-modal parcellation of the human brain to attribute stereotactic co-ordinates to one of 180 anatomical regions, the first time this approach has been combined with the ALE algorithm. We identified a total of 634 foci, based on measurements from 464 participants. Our overall comparison identified activation in the superior parietal lobule, particularly in the left hemisphere, consistent with the proposed 'top-down' role for this brain region in imagery. Inferior premotor areas and the inferior frontal sulcus were reliably activated, a finding consistent with the prominent semantic demands made by many visual imagery tasks. We observed bilateral activation in several areas associated with the integration of eye movements and visual information, including the supplementary and cingulate eye fields (SCEFs) and the frontal eye fields (FEFs), suggesting that enactive processes are important in visual imagery. V1 was typically activated during visual imagery, even when participants had their eyes closed, consistent with influential depictive theories of visual imagery. Temporal lobe activation was restricted to area PH and regions of the fusiform gyrus, adjacent to the fusiform face complex (FFC). These results provide a secure foundation for future work to characterise in greater detail the functional contributions of specific areas to visual imagery. Copyright © 2017. Published by Elsevier Ltd.
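
    The step of attributing stereotactic co-ordinates to parcellation regions can be illustrated with a short, hypothetical helper; here 'atlas' is assumed to be a 3-D integer label volume and 'affine' its voxel-to-world (MNI) transform, e.g. as loaded with nibabel, neither of which comes from the study itself.

```python
import numpy as np

def coords_to_regions(coords_mm, atlas, affine):
    # Map world-space (mm) co-ordinates to voxel indices, then read the parcel label.
    inv = np.linalg.inv(affine)
    labels = []
    for xyz in coords_mm:
        vox = inv @ np.append(np.asarray(xyz, float), 1.0)   # homogeneous coordinates
        i, j, k = np.round(vox[:3]).astype(int)
        labels.append(int(atlas[i, j, k]))
    return labels
```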

  3. Preserved local but disrupted contextual figure-ground influences in an individual with abnormal function of intermediate visual areas

    Science.gov (United States)

    Brooks, Joseph L.; Gilaie-Dotan, Sharon; Rees, Geraint; Bentin, Shlomo; Driver, Jon

    2012-01-01

    Visual perception depends not only on local stimulus features but also on their relationship to the surrounding stimulus context, as evident in both local and contextual influences on figure-ground segmentation. Intermediate visual areas may play a role in such contextual influences, as we tested here by examining LG, a rare case of developmental visual agnosia. LG has no evident abnormality of brain structure and functional neuroimaging showed relatively normal V1 function, but his intermediate visual areas (V2/V3) function abnormally. We found that contextual influences on figure-ground organization were selectively disrupted in LG, while local sources of figure-ground influences were preserved. Effects of object knowledge and familiarity on figure-ground organization were also significantly diminished. Our results suggest that the mechanisms mediating contextual and familiarity influences on figure-ground organization are dissociable from those mediating local influences on figure-ground assignment. The disruption of contextual processing in intermediate visual areas may play a role in the substantial object recognition difficulties experienced by LG. PMID:22947116

  4. Medial temporal lobe

    International Nuclear Information System (INIS)

    Silver, A.J.; Cross, D.T.; Friedman, D.P.; Bello, J.A.; Hilal, S.K.

    1989-01-01

    To better define the MR appearance of hippocampal sclerosis, the authors have reviewed over 500 MR coronal images of the temporal lobes. Many cysts were noted that analysis showed were of choroid-fissure (arachnoid) origin. Their association with seizures was low. A few nontumorous, static, medial temporal lesions, noted on T2-weighted coronal images, were poorly visualized on T1-weighted images and did not enhance with gadolinium. The margins were irregular, involved the hippocampus, and were often associated with focal atrophy. The lesions usually were associated with seizure disorders and specific electroencephalographic changes, and the authors believe they represented hippocampal sclerosis

  5. Data visualization of temporal ozone pollution between urban and ...

    African Journals Online (AJOL)

    ... this study was conducted with the aim to assess and visualize the occurrence of potential Ozone pollution severity of two chosen locations in Selangor, Malaysia: Shah Alam (urban) and Banting (sub-urban). Data visualization analytics were employed using Ozone exceedances and Principal Component Analysis (PCA).

  6. Prolonged fasting impairs neural reactivity to visual stimulation.

    Science.gov (United States)

    Kohn, N; Wassenberg, A; Toygar, T; Kellermann, T; Weidenfeld, C; Berthold-Losleben, M; Chechko, N; Orfanos, S; Vocke, S; Laoutidis, Z G; Schneider, F; Karges, W; Habel, U

    2016-01-01

    Previous literature has shown that hypoglycemia influences the intensity of the BOLD signal. A similar but smaller effect may also be elicited by low normal blood glucose levels in healthy individuals. This may not only confound the BOLD signal measured in fMRI, but also more generally interact with cognitive processing, and thus indirectly influence fMRI results. Here we show in a placebo-controlled, crossover, double-blind study on 40 healthy subjects, that overnight fasting and low normal levels of glucose contrasted to an activated, elevated glucose condition have an impact on brain activation during basal visual stimulation. Additionally, functional connectivity of the visual cortex shows a strengthened association with higher-order attention-related brain areas in an elevated blood glucose condition compared to the fasting condition. In a fasting state visual brain areas show stronger coupling to the inferior temporal gyrus. Results demonstrate that prolonged overnight fasting leads to a diminished BOLD signal in higher-order occipital processing areas when compared to an elevated blood glucose condition. Additionally, functional connectivity patterns underscore the modulatory influence of fasting on visual brain networks. Patterns of brain activation and functional connectivity associated with a broad range of attentional processes are affected by maturation and aging and associated with psychiatric disease and intoxication. Thus, we conclude that prolonged fasting may decrease fMRI design sensitivity in any task involving attentional processes when fasting status or blood glucose is not controlled.

  7. On the Functional Neuroanatomy of Visual Word Processing: Effects of Case and Letter Deviance

    Science.gov (United States)

    Kronbichler, Martin; Klackl, Johannes; Richlan, Fabio; Schurz, Matthias; Staffen, Wolfgang; Ladurner, Gunther; Wimmer, Heinz

    2009-01-01

    This functional magnetic resonance imaging study contrasted case-deviant and letter-deviant forms with familiar forms of the same phonological words (e.g., "TaXi" and "Taksi" vs. "Taxi") and found that both types of deviance led to increased activation in a left occipito-temporal region, corresponding to the visual word form area (VWFA). The…

  8. The associations between multisensory temporal processing and symptoms of schizophrenia.

    Science.gov (United States)

    Stevenson, Ryan A; Park, Sohee; Cochran, Channing; McIntosh, Lindsey G; Noel, Jean-Paul; Barense, Morgan D; Ferber, Susanne; Wallace, Mark T

    2017-01-01

    Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Postdictive modulation of visual orientation.

    Science.gov (United States)

    Kawabe, Takahiro

    2012-01-01

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  10. A Framework for visualization of criminal networks

    DEFF Research Database (Denmark)

    Rasheed, Amer

    networks, network analysis, composites, temporal data visualization, clustering and hierarchical clustering of data, but there are a number of areas which are overlooked by researchers. Moreover, there are some issues, for instance, a lack of effective filtering techniques and computational overhead......This Ph.D. thesis describes research concerning the application of criminal network visualization in the field of investigative analysis. There are a number of ways in which investigative analysis can locate the hidden motive behind any criminal activity. Firstly, the investigative analyst must...... have the ability to understand the criminal plot, since a comprehensive plot is a pre-requisite for conducting an organized crime. Secondly, the investigator should understand the organization and structure of the criminal network. Knowledge about these two aspects is vital in conducting an investigative...

  11. Prevalence of increases in functional connectivity in visual, somatosensory and language areas in congenital blindness

    DEFF Research Database (Denmark)

    Heine, Lizette; Bahri, Mohamed A; Cavaliere, Carlo

    2015-01-01

    stronger functional connectivity in blind participants between the visual ROIs and areas implicated in language and tactile (Braille) processing such as the inferior frontal gyrus (Broca's area), thalamus, supramarginal gyrus and cerebellum. The observed group differences underscore the extent of the cross...

  12. Visualization of the tire-soil interaction area by means of ObjectARX programming interface

    Science.gov (United States)

    Mueller, W.; Gruszczyński, M.; Raba, B.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.; Boniecki, P.

    2014-04-01

    The process of data visualization, which is important for data analysis, becomes problematic when large data sets generated via computer simulations are available. This problem concerns, among others, the models that describe the geometry of tire-soil interaction. For the purpose of a graphical representation of this area and the implementation of various geometric calculations, the authors have developed a plug-in application for AutoCAD, based on the latest technologies, including ObjectARX, LINQ and the Visual Studio platform. The selected programming tools offer a wide variety of IT structures that enable data visualization and data analysis and are important, e.g., in model verification.

  13. Temporal bone trauma and imaging

    International Nuclear Information System (INIS)

    Turetschek, K.; Czerny, C.; Wunderbaldinger, P.; Steiner, E.

    1997-01-01

    Fractures of the temporal bone result from direct trauma to the temporal bone or occur as one component of a severe craniocerebral injury. Complications of temporal trauma are hemotympanum, facial nerve paralysis, conductive or sensorineural hearing loss, and leakage of cerebrospinal fluid. Early recognition and appropriate therapy may improve or prevent permanent deficits related to such complications. Only 20-30% of temporal bone fractures can be visualized by plain films. CT has displaced plain radiography in the investigation of otological trauma because subtle bony details are best evaluated by CT, which can even be reformatted in multiple projections, regardless of the original plane of scanning. Associated epidural, subdural, and intracerebral hemorrhagic lesions are better defined by MRI. (orig.) [de

  14. fMRI neurofeedback of higher visual areas and perceptual biases.

    Science.gov (United States)

    Habes, I; Rushton, S; Johnston, S J; Sokunbi, M O; Barawi, K; Brosnan, M; Daly, T; Ihssen, N; Linden, D E J

    2016-05-01

    The self-regulation of brain activation via neurofeedback training offers a method to study the relationship between brain areas and perception in a more direct manner than the conventional mapping of brain responses to different types of stimuli. The current proof-of-concept study aimed to demonstrate that healthy volunteers can self-regulate activity in the parahippocampal place area (PPA) over the fusiform face area (FFA). Both areas are involved in higher order visual processing and are activated during the imagery of scenes and faces respectively. Participants (N=9) were required to upregulate PPA relative to FFA activity, and all succeeded at the task, with imagery of scenes being the most commonly reported mental strategy. A control group (N=8) underwent the same imagery and testing procedure, albeit without neurofeedback, in a mock MR scanner to account for any non-specific training effects. The upregulation of PPA activity occurred concurrently with activation of prefrontal and parietal areas, which have been associated with ideation and mental image generation. We tested whether successful upregulation of the PPA relative to FFA had consequences on perception by assessing bistable perception of faces and houses in a binocular rivalry task (before and after the scanning sessions) and categorisation of faces and scenes presented in transparent composite images (during scanning, interleaved with the self-regulation blocks). Contrary to our expectations, upregulation of the PPA did not alter the duration of face or house perception in the rivalry task and response speed and accuracy in the categorisation task. This conclusion was supported by the results of another control experiment (N=10 healthy participants) that involved intensive exposure to category-specific stimuli and did not show any behavioural or perceptual changes. We conclude that differential self-regulation of higher visual areas can be achieved, but that perceptual biases under conditions of

  15. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    Science.gov (United States)

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
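
    Conditional Granger causality admits a compact time-domain sketch: the influence of x on y given z is the log ratio of residual variances of two autoregressive fits of y, one without and one with the past of x. The lag order and this plain least-squares formulation are assumptions; the study's CGC analysis may differ in detail.

```python
import numpy as np

def _lagged(mat, p):
    # Rows t = p..T-1; columns are mat[t-1], ..., mat[t-p], stacked side by side.
    T = mat.shape[0]
    return np.hstack([mat[p - k - 1:T - k - 1] for k in range(p)])

def conditional_gc(x, y, z, p=2):
    x, y, z = (np.asarray(a, float) for a in (x, y, z))
    full = _lagged(np.column_stack([y, x, z]), p)     # past of y, x and z
    restr = _lagged(np.column_stack([y, z]), p)       # past of y and z only
    target = y[p:]
    res_full = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    res_restr = target - restr @ np.linalg.lstsq(restr, target, rcond=None)[0]
    return np.log(np.var(res_restr) / np.var(res_full))   # > 0: x helps predict y beyond z
```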

  16. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  17. Visualization and assessment of spatio-temporal covariance properties

    KAUST Repository

    Huang, Huang; Sun, Ying

    2017-01-01

    approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use

  18. Effects of Temporal and Interspecific Variation of Specific Leaf Area on Leaf Area Index Estimation of Temperate Broadleaved Forests in Korea

    Directory of Open Access Journals (Sweden)

    Boram Kwon

    2016-09-01

    Full Text Available This study investigated the effects of interspecific and temporal variation of specific leaf area (SLA, cm2·g−1) on leaf area index (LAI) estimation for three deciduous broadleaved forests (Gwangneung (GN), Taehwa (TH), and Gariwang (GRW)) in Korea with varying ages and composition of tree species. In fall of 2014, fallen leaves were periodically collected using litter traps and classified by species. LAI was estimated by obtaining SLAs using four calculation methods (A: including both interspecific and temporal variation in SLA; B: species-specific mean SLA; C: period-specific mean SLA; and D: overall mean), then multiplying the SLAs by the amount of leaves. SLA varied across different species in all plots, and SLAs of upper canopy species were less than those of lower canopy species. The LAIs calculated using method A, the reference method, were GN 6.09, TH 5.42, and GRW 4.33. LAIs calculated using method B showed a difference of up to 3% from the LAI of method A, but LAIs calculated using methods C and D were overestimated. Therefore, species-specific SLA must be considered for precise LAI estimation for broadleaved forests that include multiple species.
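
    The four calculation methods (A-D) translate directly into a short computation. The sketch below assumes a litter table with leaf dry mass per species and collection period, a matching SLA table in cm2·g−1, and the ground area sampled by the litter traps; the arithmetic is an interpretation of the methods as described, not the authors' code.

```python
import numpy as np

# leaf_mass_g and sla_cm2_per_g: 2-D arrays indexed [species, period].
def lai_methods(leaf_mass_g, sla_cm2_per_g, ground_area_cm2):
    # A: species- and period-specific SLA (reference method)
    lai_a = np.sum(leaf_mass_g * sla_cm2_per_g) / ground_area_cm2
    # B: species-specific mean SLA (averaged over periods)
    lai_b = np.sum(leaf_mass_g * sla_cm2_per_g.mean(axis=1, keepdims=True)) / ground_area_cm2
    # C: period-specific mean SLA (averaged over species)
    lai_c = np.sum(leaf_mass_g * sla_cm2_per_g.mean(axis=0, keepdims=True)) / ground_area_cm2
    # D: one overall mean SLA
    lai_d = np.sum(leaf_mass_g) * sla_cm2_per_g.mean() / ground_area_cm2
    return lai_a, lai_b, lai_c, lai_d
```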

  19. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    Science.gov (United States)

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  20. Dissociation of object and spatial visual processing pathways in human extrastriate cortex

    Energy Technology Data Exchange (ETDEWEB)

    Haxby, J.V.; Grady, C.L.; Horwitz, B.; Ungerleider, L.G.; Mishkin, M.; Carson, R.E.; Herscovitch, P.; Schapiro, M.B.; Rapoport, S.I. (National Institutes of Health, Bethesda, MD (USA))

    1991-03-01

    The existence and neuroanatomical locations of separate extrastriate visual pathways for object recognition and spatial localization were investigated in healthy young men. Regional cerebral blood flow was measured by positron emission tomography and bolus injections of H2(15)O, while subjects performed face matching, dot-location matching, or sensorimotor control tasks. Both visual matching tasks activated lateral occipital cortex. Face discrimination alone activated a region of occipitotemporal cortex that was anterior and inferior to the occipital area activated by both tasks. The spatial location task alone activated a region of lateral superior parietal cortex. Perisylvian and anterior temporal cortices were not activated by either task. These results demonstrate the existence of three functionally dissociable regions of human visual extrastriate cortex. The ventral and dorsal locations of the regions specialized for object recognition and spatial localization, respectively, suggest some homology between human and nonhuman primate extrastriate cortex, with displacement in human brain, possibly related to the evolution of phylogenetically newer cortical areas.

  1. Universities’ visual image and Internet communication

    Directory of Open Access Journals (Sweden)

    Okushova Gulnafist

    2016-01-01

Full Text Available Universities of the 21st century are built on digital walls and on the Internet foundation. Their “real virtuality”, in M. Castells' sense, is represented by information and communication flows that reflect various areas: education, research, culture, leisure, and others. The visual image of a university is the bridge that connects its physical and digital reality and identifies it within the information flow on the Internet. Visual image identification on the Internet and the function that the visual image performs as an electronic communication tool lay the foundation for our research. The key focal point of a university's visual image on the Internet is its official website. Our research shows that, with the development of computer technology, the semantic heterogeneity of universities' visual images has changed from Web 1.0 to Web 2.0. A university's web portal both reflects and produces its digital life, which is broader and more informative than its physical life alone, as there are no temporal and spatial boundaries in electronic interactions. Polysemy and directed communication through a university's visual images are effective developments for both online and offline university communication. Internet communication reaches all spheres of university life and reflects its content. Visual images of universities, based on electronic communication tools, not only “open” them to digital natives and digital immigrants, but also create a cyberspace for scientific and educational discourse.

  2. Cortico-cortical connections of areas 44 and 45B in the macaque monkey.

    Science.gov (United States)

    Frey, Stephen; Mackey, Scott; Petrides, Michael

    2014-04-01

    In the human brain, areas 44 and 45 constitute Broca's region, the ventrolateral frontal region critical for language production. The homologues of these areas in the macaque monkey brain have been established by direct cytoarchitectonic comparison with the human brain. The cortical areas that project monosynaptically to areas 44 and 45B in the macaque monkey brain require clarification. Fluorescent retrograde tracers were placed in cytoarchitectonic areas 44 and 45B of the macaque monkey, as well as in the anterior part of the inferior parietal lobule and the superior temporal gyrus. The results demonstrate that ipsilateral afferent connections of area 44 arise from local frontal areas, including rostral premotor cortical area 6, from secondary somatosensory cortex, the caudal insula, and the cingulate motor region. Area 44 is strongly linked with the anterior inferior parietal lobule (particularly area PFG and the adjacent anterior intraparietal sulcus). Input from the temporal lobe is limited to the fundus of the superior temporal sulcus extending caudal to the central sulcus. There is also input from the sulcal part of area Tpt in the upper bank of the superior temporal sulcus. Area 45B shares some of the connections of area 44, but can be distinguished from area 44 by input from the caudal inferior parietal lobule (area PG) and significant input from the part of the superior temporal sulcus that extends anterior to the central sulcus. Area 45B also receives input from visual association cortex that is not observed in area 44. The results have provided a clarification of the relative connections of areas 44 and 45B of the ventrolateral frontal region which, in the human brain, subserves certain aspects of language processing. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.

    Science.gov (United States)

    Wiemers, Michael; Fischer, Martin H

    2016-01-01

    Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.

  4. Spatially uniform but temporally variable bacterioplankton in a semi-enclosed coastal area.

    Science.gov (United States)

    Meziti, Alexandra; Kormas, Konstantinos A; Moustaka-Gouni, Maria; Karayanni, Hera

    2015-07-01

    Studies focusing on the temporal and spatial dynamics of bacterioplankton communities within littoral areas undergoing direct influences from the coast are quite limited. In addition, they are more complicated to resolve compared to communities in the open ocean. In order to elucidate the effects of spatial vs. temporal variability on bacterial communities in a highly land-influenced semi-enclosed gulf, surface bacterioplankton communities from five coastal sites in Igoumenitsa Gulf (Ionian Sea, Greece) were analyzed over a nine-month period using 16S rDNA 454-pyrosequencing. Temporal differences were more pronounced than spatial ones, with lower diversity indices observed during the summer months. During winter and early spring, bacterial communities were dominated by SAR11 representatives, while this pattern changed in May when they were abruptly replaced by members of Flavobacteriales, Pseudomonadales, and Alteromonadales. Additionally, correlation analysis showed high negative correlations between the presence of SAR11 OTUs in relation to temperature and sunlight that might have driven, directly or indirectly, the disappearance of these OTUs in the summer months. The dominance of SAR11 during the winter months further supported the global distribution of the clade, not only in the open-sea, but also in coastal systems. This study revealed that specific bacteria exhibited distinct succession patterns in an anthropogenic-impacted coastal system. The major bacterioplankton component was represented by commonly found marine bacteria exhibiting seasonal dynamics, while freshwater and terrestrial-related phylotypes were absent. Copyright © 2015 Elsevier GmbH. All rights reserved.

  5. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas – a review

    OpenAIRE

    E. Cristiano; M.-C. ten Veldhuis; N. van de Giesen

    2017-01-01

    In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological res...

  6. Sensitivity to Temporal Reward Structure in Amygdala Neurons

    OpenAIRE

    Bermudez, Maria A.; Göbel, Carl; Schultz, Wolfram

    2012-01-01

    Summary The time of reward and the temporal structure of reward occurrence fundamentally influence behavioral reinforcement and decision processes [1–11]. However, despite knowledge about timing in sensory and motor systems [12–17], we know little about temporal mechanisms of neuronal reward processing. In this experiment, visual stimuli predicted different instantaneous probabilities of reward occurrence that resulted in specific temporal reward structures. Licking behavior demonstrated that...

  7. Which visual functions depend on intermediate visual regions? Insights from a case of developmental visual form agnosia.

    Science.gov (United States)

    Gilaie-Dotan, Sharon

    2016-03-01

    A key question in visual neuroscience is the causal link between specific brain areas and perceptual functions; which regions are necessary for which visual functions? While the contribution of primary visual cortex and high-level visual regions to visual perception has been extensively investigated, the contribution of intermediate visual areas (e.g. V2/V3) to visual processes remains unclear. Here I review more than 20 visual functions (early, mid, and high-level) of LG, a developmental visual agnosic and prosopagnosic young adult, whose intermediate visual regions function in a significantly abnormal fashion as revealed through extensive fMRI and ERP investigations. While expectedly, some of LG's visual functions are significantly impaired, some of his visual functions are surprisingly normal (e.g. stereopsis, color, reading, biological motion). During the period of eight-year testing described here, LG trained on a perceptual learning paradigm that was successful in improving some but not all of his visual functions. Following LG's visual performance and taking into account additional findings in the field, I propose a framework for how different visual areas contribute to different visual functions, with an emphasis on intermediate visual regions. Thus, although rewiring and plasticity in the brain can occur during development to overcome and compensate for hindering developmental factors, LG's case seems to indicate that some visual functions are much less dependent on strict hierarchical flow than others, and can develop normally in spite of abnormal mid-level visual areas, thereby probably less dependent on intermediate visual regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Functional organization of the face-sensitive areas in human occipital-temporal cortex.

    Science.gov (United States)

    Shao, Hanyu; Weng, Xuchu; He, Sheng

    2017-08-15

Human occipital-temporal cortex features several areas sensitive to faces, presumably forming the biological substrate for face perception. To date, there are piecemeal insights regarding the functional organization of these regions. They have come, however, from studies that are far from homogeneous with regard to the regions involved, the experimental design, and the data analysis approach. In order to provide an overall view of the functional organization of the face-sensitive areas, it is necessary to conduct a comprehensive study that taps into the pivotal functional properties of all the face-sensitive areas, within the context of the same experimental design, and uses multiple data analysis approaches. In this study, we identified the most robustly activated face-sensitive areas in bilateral occipital-temporal cortices (i.e., AFP, aFFA, pFFA, OFA, pcSTS, pSTS) and systematically compared their regionally averaged activation and multivoxel activation patterns to 96 images from 16 object categories, including faces and non-faces. This condition-rich and single-image analysis approach critically samples the functional properties of a brain region, allowing us to test how two basic functional properties, namely face-category selectivity and face-exemplar sensitivity, are distributed among these regions. Moreover, by examining the correlational structure of neural responses to the 96 images, we characterize their interactions in the greater face-processing network. We found that (1) r-pFFA showed the highest face-category selectivity, followed by l-pFFA, bilateral aFFA and OFA, and then bilateral pcSTS. In contrast, bilateral AFP and pSTS showed low face-category selectivity; (2) l-aFFA, l-pcSTS and bilateral AFP showed evidence of face-exemplar sensitivity; (3) r-OFA showed high overall response similarities with bilateral LOC and r-pFFA, suggesting it might be a transitional stage between general and face-selective information processing; (4) r-aFFA showed high
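A minimal sketch of the condition-rich analysis logic described above, using simulated regionally averaged responses (ROIs × 96 images): face-category selectivity as a scaled face-minus-nonface difference, and the ROI-by-ROI correlational structure of the 96-image response profiles. The data, the specific selectivity index, and the number of face images are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rois, n_images, n_faces = 6, 96, 12           # hypothetical: 12 face images among 96
roi_names = ["AFP", "aFFA", "pFFA", "OFA", "pcSTS", "pSTS"]

# Hypothetical regionally averaged responses (ROIs x images); real values would
# come from single-image response estimates in each face-sensitive region.
resp = rng.standard_normal((n_rois, n_images))
resp[:, :n_faces] += np.array([0.2, 0.8, 1.2, 0.9, 0.5, 0.1])[:, None]  # face boost

is_face = np.zeros(n_images, dtype=bool)
is_face[:n_faces] = True

# Face-category selectivity: mean(face) - mean(non-face), scaled by the ROI's SD.
sel = (resp[:, is_face].mean(1) - resp[:, ~is_face].mean(1)) / resp.std(1)

# Correlational structure: ROI-by-ROI correlation of the 96-image response profiles.
corr = np.corrcoef(resp)

for name, s in zip(roi_names, sel):
    print(f"{name:6s} selectivity = {s:+.2f}")
print(np.round(corr, 2))
```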

  9. Spatio-Temporal Saliency Perception via Hypercomplex Frequency Spectral Contrast

    Directory of Open Access Journals (Sweden)

    Zhiqiang Tian

    2013-03-01

Full Text Available Salient object perception is the process of sensing salient information in spatio-temporal visual scenes, and serves as a rapid pre-attention mechanism for target location in a visual smart sensor. In recent decades, many successful models of visual saliency perception have been proposed to simulate this pre-attention behavior. Since most of these methods need ad hoc parameters or high-cost preprocessing, they are difficult to use for rapid salient-object detection or to implement with parallel computing in a smart sensor. In this paper, we propose a novel spatio-temporal saliency perception method based on spatio-temporal hypercomplex spectral contrast (HSC). Firstly, the proposed HSC algorithm represents the features in the HSV (hue, saturation and value) color space and the motion features by a hypercomplex number. Secondly, the spatio-temporal salient objects are efficiently detected by hypercomplex Fourier spectral contrast in parallel. Finally, our saliency perception model also incorporates non-uniform sampling, a common property of human vision that directs visual attention to the logarithmic center of the image/video in natural scenes. The experimental results on public saliency perception datasets demonstrate the effectiveness of the proposed approach compared to eleven state-of-the-art approaches. In addition, we extend the proposed model to moving object extraction in dynamic scenes, where the proposed algorithm is superior to traditional algorithms.
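The authors' method relies on a quaternion (hypercomplex) Fourier transform; the sketch below is only a simplified stand-in that applies a per-channel spectral-residual saliency in the frequency domain and adds a frame-difference motion channel, assuming HSV input frames. It illustrates the frequency-domain spectral-contrast idea rather than reproducing the HSC algorithm itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_saliency(channel, avg_size=3, blur_sigma=2.5):
    """Spectral-residual saliency for one 2-D channel (simplified stand-in)."""
    f = np.fft.fft2(channel)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=avg_size)   # spectral contrast
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, blur_sigma)

def frame_saliency(frame_hsv, prev_value=None):
    """Combine per-channel spatial saliency with a simple temporal (motion) channel.

    frame_hsv: H x W x 3 array in HSV; prev_value: previous frame's V channel.
    """
    sal = sum(spectral_saliency(frame_hsv[..., c]) for c in range(3))
    if prev_value is not None:                       # motion as a frame difference
        sal = sal + spectral_saliency(np.abs(frame_hsv[..., 2] - prev_value))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)     # normalize to [0, 1]

# toy usage with random arrays standing in for consecutive HSV video frames
rng = np.random.default_rng(0)
f0, f1 = rng.random((64, 64, 3)), rng.random((64, 64, 3))
smap = frame_saliency(f1, prev_value=f0[..., 2])
print(smap.shape, float(smap.min()), float(smap.max()))
```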

  10. MULTI-TEMPORAL ANALYSIS OF LANDSCAPES AND URBAN AREAS

    Directory of Open Access Journals (Sweden)

    E. Nocerino

    2012-07-01

Full Text Available This article presents a 4D modelling approach that employs multi-temporal and historical aerial images to derive spatio-temporal information for scenes and landscapes. Such imagery represents a unique data source which, combined with photo interpretation and reality-based 3D reconstruction techniques, can offer a more complete modelling procedure because it adds the fourth dimension of time to 3D geometrical representation and thus allows urban planners, historians, and others to identify, describe, and analyse changes in individual scenes and buildings as well as across landscapes. Particularly important to this approach are historical aerial photos, which provide data about the past that can be collected, processed, and then integrated as a database. The proposed methodology employs both historical (1945) and more recent (1973 and 2000s) aerial images from the Trentino region in North-eastern Italy in order to create a multi-temporal database of information to assist researchers in many disciplines such as topographic mapping, geology, geography, architecture, and archaeology as they work to reconstruct building phases and to understand landscape transformations (Fig. 1).

  11. Different brain circuits underlie motor and perceptual representations of temporal intervals

    DEFF Research Database (Denmark)

    Bueti, Doemnica; Walsh, Vincent; Frith, Christopher

    2008-01-01

In everyday life, temporal information is used for both perception and action, but whether these two functions reflect the operation of similar or different neural circuits is unclear. We used functional magnetic resonance imaging to investigate the neural correlates of processing temporal information when either a motor or a perceptual representation is used. Participants viewed two identical sequences of visual stimuli and used the information differently to perform either a temporal reproduction or a temporal estimation task. By comparing brain activity evoked by these tasks and control ... V5/MT. Our findings point to a role for the parietal cortex as an interface between sensory and motor processes and suggest that it may be a key node in the translation of temporal information into action. Furthermore, we discuss the potential importance of the extrastriate cortex in processing visual ...

  12. Iconic memory and parietofrontal network: fMRI study using temporal integration.

    Science.gov (United States)

    Saneyoshi, Ayako; Niimi, Ryosuke; Suetsugu, Tomoko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko

    2011-08-03

    We investigated the neural basis of iconic memory using functional magnetic resonance imaging. The parietofrontal network of selective attention is reportedly relevant to readout from iconic memory. We adopted a temporal integration task that requires iconic memory but not selective attention. The results showed that the task activated the parietofrontal network, confirming that the network is involved in readout from iconic memory. We further tested a condition in which temporal integration was performed by visual short-term memory but not by iconic memory. However, no brain region revealed higher activation for temporal integration by iconic memory than for temporal integration by visual short-term memory. This result suggested that there is no localized brain region specialized for iconic memory per se.

  13. The role of human ventral visual cortex in motion perception

    Science.gov (United States)

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  14. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Jyrki Ahveninen

Full Text Available Given that both auditory and visual systems have anatomically separate object identification ("what") and spatial ("where") pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what") vs. spatial ("where") aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered at the alpha range 400-600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity ("what") vs. sound location ("where"). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.

  15. [Symptoms and lesion localization in visual agnosia].

    Science.gov (United States)

    Suzuki, Kyoko

    2004-11-01

There are two cortical visual processing streams, the ventral and the dorsal stream. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream can result in impairment of visual recognition. Thus we need systematic investigations to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, differ from others in that patients can recognize a face as a face and buildings as buildings, but cannot identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition has been confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Larger lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, which is in agreement with neuroimaging studies that have revealed activation of the bilateral occipito-temporal cortex during object recognition tasks.

  16. A lightning strike to the head causing a visual cortex defect with simple and complex visual hallucinations

    Science.gov (United States)

    Kleiter, Ingo; Luerding, Ralf; Diendorfer, Gerhard; Rek, Helga; Bogdahn, Ulrich; Schalke, Berthold

    2009-01-01

The case of a 23-year-old mountaineer who was hit by a lightning strike to the occiput, causing a large central visual field defect and bilateral tympanic membrane ruptures, is described. Owing to extreme agitation, the patient was sent into a drug-induced coma for 3 days. After extubation, she experienced simple and complex visual hallucinations for several days, but otherwise largely recovered. Neuropsychological tests revealed deficits in fast visual detection tasks and non-verbal learning and indicated a right temporal lobe dysfunction, consistent with a right temporal focus on electroencephalography. At 4 months after the accident, she developed a psychological reaction consisting of nightmares, with reappearance of the complex visual hallucinations and a depressive syndrome. Using the European Cooperation for Lightning Detection network, a meteorological system for lightning surveillance, the exact geographical location and nature of the lightning strike were retrospectively retraced. PMID:21734915

  17. Can visual evoked potentials be used in biometric identification?

    Science.gov (United States)

    Power, Alan J; Lalor, Edmund C; Reilly, Richard B

    2006-01-01

Due to known differences in the anatomical structure of the visual pathways and generators in different individuals, the use of visual evoked potentials offers the possibility of an alternative to existing biometric methods. A study based on visual evoked potentials from 13 individuals was carried out to assess the best combination of temporal, spectral and AR-modeling features to realize a robust biometric. From the results it can be concluded that visual evoked potentials show considerable biometric qualities, with classification accuracies reaching a high of 86.54%, and that a specific temporal and spectral combination was found to be optimal. Based on these results, the visual evoked potential may be a useful tool in biometric identification when used in conjunction with more established biometric methods.
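A hedged sketch of the kind of feature combination described above: temporal samples, band-limited spectral power, and Yule-Walker AR coefficients concatenated into one vector per epoch, with a simple nearest-template classifier. The sampling rate, band edges, AR order, and classifier are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import welch

def ar_coeffs(x, order=6):
    """AR(order) coefficients via the Yule-Walker (autocorrelation) equations."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def vep_features(epoch, fs=250):
    """Concatenate temporal, spectral and AR-model features for one VEP epoch."""
    temporal = epoch[::5]                                    # down-sampled waveform
    f, pxx = welch(epoch, fs=fs, nperseg=min(128, len(epoch)))
    bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 45)]
    spectral = [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]
    return np.concatenate([temporal, spectral, ar_coeffs(epoch)])

# toy usage: identify which of 13 "subjects" a noisy probe epoch belongs to
rng = np.random.default_rng(0)
templates = {s: vep_features(rng.standard_normal(500)) for s in range(13)}
probe = templates[4] + 0.05 * rng.standard_normal(len(templates[4]))
best = min(templates, key=lambda s: np.linalg.norm(templates[s] - probe))
print("identified subject:", best)
```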

  18. Visual short-term memory for high resolution associations is impaired in patients with medial temporal lobe damage.

    Science.gov (United States)

    Koen, Joshua D; Borders, Alyssa A; Petzold, Michael T; Yonelinas, Andrew P

    2017-02-01

The medial temporal lobe (MTL) plays a critical role in episodic long-term memory, but whether the MTL is necessary for visual short-term memory is controversial. Some studies have indicated that MTL damage disrupts visual short-term memory performance whereas other studies have failed to find such evidence. To account for these mixed results, it has been proposed that the hippocampus is critical in supporting short-term memory for high resolution complex bindings, while the cortex is sufficient to support simple, low resolution bindings. This hypothesis was tested in the current study by assessing visual short-term memory in patients with damage to the MTL and controls for high resolution and low resolution object-location and object-color associations. In the location tests, participants encoded sets of two or four objects in different locations on the screen. After each set, participants performed a two-alternative forced-choice task in which they were required to discriminate the object in the target location from the object in a high or low resolution lure location (i.e., the object locations were very close or far away from the target location, respectively). Similarly, in the color tests, participants were presented with sets of two or four objects in a different color and, after each set, were required to discriminate the object in the target color from the object in a high or low resolution lure color (i.e., the lure color was very similar or very different, respectively, to the studied color). The patients were significantly impaired in visual short-term memory, but importantly, they were more impaired for high resolution object-location and object-color bindings. The results are consistent with the proposal that the hippocampus plays a critical role in forming and maintaining complex, high resolution bindings. © 2016 Wiley Periodicals, Inc.

  19. Postdictive modulation of visual orientation.

    Directory of Open Access Journals (Sweden)

    Takahiro Kawabe

Full Text Available The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  20. Partial Correlation-Based Retinotopically Organized Resting-State Functional Connectivity Within and Between Areas of the Visual Cortex Reflects More Than Cortical Distance.

    Science.gov (United States)

    Dawson, Debra Ann; Lam, Jack; Lewis, Lindsay B; Carbonell, Felix; Mendola, Janine D; Shmuel, Amir

    2016-02-01

    Numerous studies have demonstrated functional magnetic resonance imaging (fMRI)-based resting-state functional connectivity (RSFC) between cortical areas. Recent evidence suggests that synchronous fluctuations in blood oxygenation level-dependent fMRI reflect functional organization at a scale finer than that of visual areas. In this study, we investigated whether RSFCs within and between lower visual areas are retinotopically organized and whether retinotopically organized RSFC merely reflects cortical distance. Subjects underwent retinotopic mapping and separately resting-state fMRI. Visual areas V1, V2, and V3, were subdivided into regions of interest (ROIs) according to quadrants and visual field eccentricity. Functional connectivity (FC) was computed based on Pearson's linear correlation (correlation), and Pearson's linear partial correlation (correlation between two time courses after the time courses from all other regions in the network are regressed out). Within a quadrant, within visual areas, all correlation and nearly all partial correlation FC measures showed statistical significance. Consistently in V1, V2, and to a lesser extent in V3, correlation decreased with increasing eccentricity separation. Consistent with previously reported monkey anatomical connectivity, correlation/partial correlation values between regions from adjacent areas (V1-V2 and V2-V3) were higher than those between nonadjacent areas (V1-V3). Within a quadrant, partial correlation showed consistent significance between regions from two different areas with the same or adjacent eccentricities. Pairs of ROIs with similar eccentricity showed higher correlation/partial correlation than pairs distant in eccentricity. Between dorsal and ventral quadrants, partial correlation between common and adjacent eccentricity regions within a visual area showed statistical significance; this extended to more distant eccentricity regions in V1. Within and between quadrants, correlation decreased
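To illustrate the two FC measures, the sketch below computes full Pearson correlation and partial correlation (via the inverse covariance, which is equivalent to correlating residuals after regressing out all other regions) on a toy three-region chain, where the indirect V1-V3 correlation largely disappears under partial correlation. The data and region names are hypothetical.

```python
import numpy as np

def correlation_fc(ts):
    """Pearson correlation between all ROI time courses (ts: time x ROIs)."""
    return np.corrcoef(ts, rowvar=False)

def partial_correlation_fc(ts):
    """Partial correlation between each ROI pair, controlling for all other ROIs
    (computed from the scaled negative inverse covariance)."""
    ts = ts - ts.mean(0)
    prec = np.linalg.pinv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# toy example: a chain V1 -> V2 -> V3, so the V1-V3 correlation is largely indirect
rng = np.random.default_rng(0)
n = 1000
v1 = rng.standard_normal(n)
v2 = 0.8 * v1 + 0.6 * rng.standard_normal(n)
v3 = 0.8 * v2 + 0.6 * rng.standard_normal(n)
ts = np.column_stack([v1, v2, v3])

print(np.round(correlation_fc(ts), 2))          # V1-V3 correlation is sizeable
print(np.round(partial_correlation_fc(ts), 2))  # V1-V3 partial correlation ~ 0
```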

  1. The Voronoi spatio-temporal data structure

    Science.gov (United States)

    Mioc, Darka

    2002-04-01

    information. This formal model of spatio-temporal change representation is currently applied to retroactive map updates and visualization of map evolution. It offers new possibilities in the domains of temporal GIS, transaction processing, spatio-temporal queries, spatio-temporal analysis, map animation and map visualization.

  2. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.

  3. Spatial-Temporal Variations of Chlorophyll-a in the Adjacent Sea Area of the Yangtze River Estuary Influenced by Yangtze River Discharge

    Science.gov (United States)

    Wang, Ying; Jiang, Hong; Jin, Jiaxin; Zhang, Xiuying; Lu, Xuehe; Wang, Yueqi

    2015-01-01

    Carrying abundant nutrition, terrigenous freshwater has a great impact on the spatial and temporal heterogeneity of phytoplankton in coastal waters. The present study analyzed the spatial-temporal variations of Chlorophyll-a (Chl-a) concentration under the influence of discharge from the Yangtze River, based on remotely sensed Chl-a concentrations. The study area was initially zoned to quantitatively investigate the spatial variation patterns of Chl-a. Then, the temporal variation of Chl-a in each zone was simulated by a sinusoidal curve model. The results showed that in the inshore waters, the terrigenous discharge was the predominant driving force determining the pattern of Chl-a, which brings the risk of red tide disasters; while in the open sea areas, Chl-a was mainly affected by meteorological factors. Furthermore, a diversity of spatial and temporal variations of Chl-a existed based on the degree of influences from discharge. The diluted water extended from inshore to the east of Jeju Island. This process affected the Chl-a concentration flowing through the area, and had a potential impact on the marine environment. The Chl-a from September to November showed an obvious response to the discharge from July to September with a lag of 1 to 2 months. PMID:26006121
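A minimal sketch of fitting a sinusoidal seasonal model to zone-mean Chl-a, of the general kind mentioned above: the annual cycle is written with sine and cosine terms and solved by least squares, returning a mean level, amplitude, and phase. The monthly values and the fixed 12-month period are assumptions for illustration, not the study's data or exact model.

```python
import numpy as np

def fit_seasonal_sinusoid(months, chla):
    """Fit chl(t) = a + b*sin(2*pi*t/12) + c*cos(2*pi*t/12) by least squares and
    return (mean level a, seasonal amplitude, phase in months)."""
    w = 2 * np.pi * np.asarray(months, dtype=float) / 12.0
    X = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
    a, b, c = np.linalg.lstsq(X, np.asarray(chla, dtype=float), rcond=None)[0]
    amplitude = np.hypot(b, c)
    phase = np.arctan2(c, b) * 12.0 / (2 * np.pi)
    return a, amplitude, phase

# hypothetical monthly mean Chl-a (mg m^-3) for one zone
rng = np.random.default_rng(0)
months = np.arange(1, 13)
chla = 1.5 + 0.8 * np.sin(2 * np.pi * (months - 8) / 12) + 0.1 * rng.standard_normal(12)
print(fit_seasonal_sinusoid(months, chla))
```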

  4. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    Science.gov (United States)

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  5. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    Science.gov (United States)

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  6. Advances in temporal logic

    CERN Document Server

    Fisher, Michael; Gabbay, Dov; Gough, Graham

    2000-01-01

Time is a fascinating subject that has captured mankind's imagination from ancient times to the present. It has been, and continues to be, studied across a wide range of disciplines, from the natural sciences to philosophy and logic. More than two decades ago, Pnueli in a seminal work showed the value of temporal logic in the specification and verification of computer programs. Today, a strong, vibrant international research community exists within the broader fields of computer science and AI. This volume presents a number of articles from leading researchers containing state-of-the-art results in such areas as pure temporal/modal logic, specification and verification, temporal databases, temporal aspects in AI, tense and aspect in natural language, and temporal theorem proving. Earlier versions of some of the articles were given at the most recent International Conference on Temporal Logic, University of Manchester, UK. Readership: Any student of the area - postgraduate, postdoctoral or even research professor ...

  7. [Spatio-temporal distribution of scrub typhus and related influencing factors in coastal beach area of Yancheng, China].

    Science.gov (United States)

    Chen, Y Z; Li, F; Xu, H; Huang, L C; Gu, Z G; Sun, Z Y; Yan, G J; Zhu, Y J; Tang, C

    2016-02-01

In order to provide better programs for the monitoring, early warning and prevention of scrub typhus in the coastal beach area, the temporal-spatial distribution characteristics of scrub typhus were summarized, and the relationships between the temporal-spatial clustering of scrub typhus, meteorological factors, rodent distribution and rodent biological characteristics in the coastal beach area of Yancheng city were studied. Network-reported scrub typhus cases and information on population and weather from monitoring stations in the coastal beach area of Yancheng city, from 2005 to 2014, were collected and processed. The distribution, density and seasonal fluctuation of rodents in the coastal beach area were monitored from April 2011 to December 2013. Methods such as descriptive statistics, space-time permutation scan statistics, and autocorrelation and cross-correlation analysis were used to analyze the temporal-spatial distribution of scrub typhus and its correlation with rodent distribution, density fluctuation and meteorological indexes. A zero-inflated Poisson (ZIP) regression model was constructed according to the distribution of the related data. All analyses were performed with Excel 2003, SPSS 16.0, Mapinfo 11.0, Satscan 9.0 and Stata/SE 10.0 software. (1) The incidence of scrub typhus gradually increased, with the highest annual incidence, 5.81/10 million, seen in 2014. There was an autumn peak of scrub typhus, with the highest monthly incidence rate, 12.02/10 million, in November. The incidence of scrub typhus was high in Binhai, Dafeng and Xiangshui, with average incidence rates of 3.30/10 million, 3.21/10 million and 2.79/10 million, respectively. There were 12 towns with high incidence rates in the coastal beach area, with incidence rates between 4.41/10 million and 10.03/10 million. (2) Three incidence clusters of scrub typhus were seen in 25 towns, between October 2012 and November 2012 in Dongtai, Dafeng
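A hedged sketch of a zero-inflated Poisson fit of the kind referred to above, writing the ZIP log-likelihood directly and maximizing it with scipy; the covariate (monthly mean temperature), the structural-zero fraction, and all data are hypothetical and not the study's variables or software.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, X, y):
    """Negative log-likelihood of a zero-inflated Poisson model:
    P(y=0) = pi + (1-pi)*exp(-mu),  P(y=k>0) = (1-pi)*Poisson(k; mu),
    with log(mu) = X @ beta and a constant inflation probability pi."""
    beta, logit_pi = params[:-1], params[-1]
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    mu = np.exp(X @ beta)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-mu))
    ll_pos = np.log(1 - pi) - mu + y * np.log(mu) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

# hypothetical data: monthly scrub typhus counts per town vs. mean temperature
rng = np.random.default_rng(0)
n = 300
temp = rng.uniform(5, 30, n)
X = np.column_stack([np.ones(n), temp])
true_mu = np.exp(-2.0 + 0.12 * temp)
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(true_mu))   # 40% structural zeros

res = minimize(zip_negloglik, x0=np.zeros(X.shape[1] + 1), args=(X, y), method="BFGS")
print("beta:", np.round(res.x[:-1], 3), " pi:", round(1 / (1 + np.exp(-res.x[-1])), 3))
```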

  8. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
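For reference, the perception sensitivity measure d' used above can be computed from hit and false-alarm rates as d' = z(hit rate) - z(false-alarm rate); the sketch below applies a standard log-linear correction and uses hypothetical trial counts, not the study's data.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# hypothetical counts: visual-only vs. spatiotemporally congruent audiovisual trials
print("visual only        d' =", round(d_prime(60, 40, 25, 75), 2))
print("congruent AV sound d' =", round(d_prime(75, 25, 20, 80), 2))
```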

  9. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas - A review

    NARCIS (Netherlands)

    Cristiano, E.; ten Veldhuis, J.A.E.; van de Giesen, N.C.

    2017-01-01

    In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological

  10. Towards Visual Navigation of an Autonomous Underwater Vehicle in Areas with Posidonia Oceanica

    Directory of Open Access Journals (Sweden)

    Francisco Bonin-Font

    2017-12-01

Full Text Available This paper presents an exhaustive, extensive and detailed experimental assessment of different types of visual key-points in terms of robustness, stability and traceability, in images taken in marine areas densely colonized with Posidonia Oceanica (P.O.). This work has been focused mainly on two issues: (a) evaluating the capacity of several image color and contrast enhancing preprocessing techniques to increase the image quality and the number of stable features, and (b) finding the feature detector/descriptor pair, from a wide range of different combinations, that maximizes the number of inlier correspondences in consecutive frames or in frames that close a loop (images that overlap, taken at distant time instants, from different viewpoints or even under different environmental conditions). Conclusions extracted from both evaluations will directly affect the quality of visual odometers and/or the image registration processes involved in visual SLAM approaches.
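A minimal sketch of one detector/descriptor evaluation of the sort described above: enhance two overlapping grayscale frames (CLAHE as one example contrast-enhancement step), match ORB keypoints with a ratio test, and count RANSAC-homography inliers as the comparison metric. ORB, the thresholds, and the file names are illustrative assumptions, not the pairs or settings tested in the paper.

```python
import cv2
import numpy as np

def enhance(gray):
    """One example preprocessing step: CLAHE contrast enhancement."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def inlier_count(img1, img2, ratio=0.75):
    """Count RANSAC-homography inliers for ORB keypoints matched between two
    overlapping frames; higher counts suggest a more stable detector/descriptor
    pair for registration or loop closing."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 4:
        return 0
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return 0 if mask is None else int(mask.sum())

# usage (file names are placeholders):
# a = enhance(cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE))
# b = enhance(cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE))
# print("inliers:", inlier_count(a, b))
```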

  11. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    Science.gov (United States)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  12. Dynamic perfusion CT: Optimizing the temporal resolution for the calculation of perfusion CT parameters in stroke patients

    Energy Technology Data Exchange (ETDEWEB)

    Kaemena, Andreas [Department of Radiology, Charite-Medical University Berlin, Augustenburger Platz 1, D-13353 Berlin (Germany)], E-mail: andreas.kaemena@charite.de; Streitparth, Florian; Grieser, Christian; Lehmkuhl, Lukas [Department of Radiology, Charite-Medical University Berlin, Augustenburger Platz 1, D-13353 Berlin (Germany); Jamil, Basil [Department of Radiotherapy, Charite-Medical University Berlin, Schumannstr. 20/21, D-10117 Berlin (Germany); Wojtal, Katarzyna; Ricke, Jens; Pech, Maciej [Department of Radiology, Charite-Medical University Berlin, Augustenburger Platz 1, D-13353 Berlin (Germany)

    2007-10-15

    Purpose: To assess the influence of different temporal sampling rates on the accuracy of the results from cerebral perfusion CTs in patients with an acute ischemic stroke. Material and methods: Thirty consecutive patients with acute stroke symptoms received a dynamic perfusion CT (LightSpeed 16, GE). Forty millilitres of iomeprol (Imeron 400) were administered at an injection rate of 4 ml/s. After a scan delay of 7 s, two adjacent 10 mm slices at 80 kV and 190 mA were acquired in a cine mode technique with a cine duration of 49 s. Parametric maps for the blood flow (BF), blood volume (BV) and mean transit time (MTT) were calculated for temporal sampling intervals of 0.5, 1, 2, 3 and 4 s using GE's Perfusion 3 software package. In addition to the quantitative ROI data analysis, a visual perfusion map analysis was performed. Results: The perfusion analysis proved to be technically feasible with all patients. The calculated perfusion values revealed significant differences with regard to the BF, BV and MTT, depending on the employed temporal resolution. The perfusion contrast between ischemic lesions and healthy brain tissue decreased continuously at the lower temporal resolutions. The visual analysis revealed that ischemic lesions were best depicted with sampling intervals of 0.5 and 1 s. Conclusion: We recommend a temporal scan resolution of two images per second for the best detection and depiction of ischemic areas.

  13. Supramodal processing optimizes visual perceptual learning and plasticity.

    Science.gov (United States)

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First and common to all three groups, vlPFC showed selectivity to the learned coherence levels whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+ possibly mediated by temporal cortices in AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory

  14. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    Directory of Open Access Journals (Sweden)

    Mark eLaing

    2015-10-01

Full Text Available The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only or auditory-visual (AV) trials in the scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for auditory-visual integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
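A simplified sketch of the psychophysiological interaction (PPI) logic used above: regress a target region's time course on a task regressor, a seed region's time course, and their product, and read task-dependent connectivity from the interaction coefficient. Real fMRI PPI additionally deconvolves to the neural level and convolves with the HRF; the data and region roles here are hypothetical.

```python
import numpy as np

def ppi_beta(target, seed, task):
    """OLS fit of target ~ 1 + task + seed + task*seed; returns the PPI
    (interaction) coefficient. A simplified sketch of the PPI design."""
    seed_c = seed - seed.mean()
    task_c = task - task.mean()
    X = np.column_stack([np.ones_like(seed), task_c, seed_c, task_c * seed_c])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[3]

# toy data: coupling between an "auditory" seed and a "frontal" target is stronger
# during incongruent blocks (task = 1) than congruent blocks (task = 0)
rng = np.random.default_rng(0)
n = 400
task = (np.arange(n) // 50) % 2          # alternating 50-scan blocks
seed = rng.standard_normal(n)
target = (0.3 + 0.3 * task) * seed + rng.standard_normal(n)

print("PPI coefficient:", round(float(ppi_beta(target, seed, task)), 3))
```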

  15. Female carpet weavers' visual acuity and effective factors: Fordu rural area of Qom

    Directory of Open Access Journals (Sweden)

    Khajenasiri F.

    2007-11-01

Full Text Available Background: Healthy vision is one of the important elements for workers in the carpet weaving industry and has an essential role in improving job quality and efficiency. Visual acuity is the primary outcome measure in most studies involving eye diseases. The aim of this study was to determine visual acuity and its associated factors among women carpet weavers in the Fordu rural area of Qom. Methods: In a cross-sectional (descriptive-analytical) study, the visual acuity of 177 women carpet weavers was determined. Daily work hours, job history, age, literacy, trauma history, diabetes history, family history of eye diseases, history of eye diseases and family size of the subjects were recorded. Results: The results indicated that the right-eye visual acuity of 72.4% of the women was desirable (0-0.8) and that of 27.6% was undesirable (0.9-1.2). The corresponding results for the left eyes were 69.5% and 30.5%, respectively. In addition, the results showed that with increasing daily work hours, job history and age, the percentage of women with undesirable visual acuity in both eyes increased. With higher literacy levels, the percentage of women with undesirable visual acuity in both eyes decreased. Undesirable visual acuity was more frequent in subjects with a history of trauma. The relations between visual acuity and job history, age, literacy, trauma history and history of eye diseases were statistically significant (P<0.05 in all cases). Conclusions: A high percentage of the women carpet weavers had undesirable visual acuity, and the relations between visual acuity and job history, age, literacy, trauma history and history of eye diseases were statistically significant (P<0.05 in all cases).

  16. Comparison of capacity for diagnosis and visuality of auditory ossicles at different scanning angles in the computed tomography of temporal bone

    International Nuclear Information System (INIS)

    Ogura, Akio; Nakayama, Yoshiki

    1992-01-01

Computed tomographic (CT) scanning has made significant contributions to the diagnosis and evaluation of temporal bone lesions through thin-section, high-resolution techniques. However, these techniques involve greater radiation exposure to the lens of the patient. A means was thus sought of reducing radiation exposure by comparing different scanning angles, +15 degrees and -10 degrees to Reid's base line. The purposes of this study were to measure the radiation exposure to the lens for the two tomographic planes and to compare their ability to visualize the auditory ossicles and labyrinthine structures. Visual evaluation of the tomographic images of the auditory ossicles was performed in a blinded manner by six radiologists using a four-point ranking scale. The statistical significance of the intergroup difference in the visualization of the tomographic planes was assessed at a significance level of 0.01. Thermoluminescent dosimeter chips were placed on the cornea of a tissue-equivalent skull phantom to evaluate the radiation exposure for the two tomographic planes. As a result, the tomographic plane at an angle of -10 degrees to Reid's base line allowed better visualization of the malleus, incus, facial nerve canal, and tuba auditiva than the other plane (p<0.01). Scanning at an angle of -10 degrees to Reid's base line reduced the radiation exposure to approximately one-fiftieth (1/50) of that with scans at the other angle. (author)

  17. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Directory of Open Access Journals (Sweden)

    David Alais

    2010-06-01

    Full Text Available An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order

  18. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Science.gov (United States)

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be

  19. Breaking the excitation-inhibition balance makes the cortical network’s space-time dynamics distinguish simple visual scenes

    DEFF Research Database (Denmark)

    Roland, Per E.; Bonde, Lars H.; Forsberg, Lars E.

    2017-01-01

    Brain dynamics are often taken to be temporal dynamics of spiking and membrane potentials in a balanced network. Almost all evidence for a balanced network comes from recordings of cell bodies of few single neurons, neglecting more than 99% of the cortical network. We examined the space-time dynamics of excitation and inhibition simultaneously in dendrites and axons over four visual areas of ferrets exposed to visual scenes with stationary and moving objects. The visual stimuli broke the tight balance between excitation and inhibition such that the network exhibited longer episodes of net excitation subsequently balanced by net inhibition, in contrast to a balanced network. Locally in all four areas the amount of net inhibition matched the amount of net excitation with a delay of 125 ms. The space-time dynamics of excitation-inhibition evolved to reduce the complexity of neuron interactions...

  20. Frequency modulation of neural oscillations according to visual task demands.

    Science.gov (United States)

    Wutz, Andreas; Melcher, David; Samaha, Jason

    2018-02-06

    Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.
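
    A minimal sketch of how a peak alpha frequency of the kind reported above can be estimated from a single sensor's time series, assuming a Welch power spectrum and an 8-12 Hz search band; the sampling rate, segment length, function name and synthetic signal are illustrative assumptions, not the authors' MEG pipeline.

```python
import numpy as np
from scipy.signal import welch

def peak_alpha_frequency(signal, fs, band=(8.0, 12.0)):
    """Return the frequency of maximal power within the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # ~0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

# Usage with synthetic data: a 10.4 Hz oscillation embedded in noise.
fs = 600.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 10.4 * t) + 0.5 * np.random.randn(t.size)
print(peak_alpha_frequency(sig, fs))  # ~10.5 Hz, limited by frequency resolution
```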

  1. TransVisuality : The Cultural Dimension of Visuality

    DEFF Research Database (Denmark)

    The Transvisuality Project In little more than a decade, visual culture has proven its status and commitment as an independent field of research, drawing on and continuing areas such as art history, cultural studies, semiotics and media research, as well as parts of visual sociology, visual ... for visual culture, transcending a number of disciplinary and geographical borders. The first volume, ‘Boundaries and Creative Openings’, explores the implications of a cultural dimension of ‘visuality’ when seen as a concept reflecting and challenging fundamental aspects of culture, from the arts to social ... anthropology and visual communication. Visual culture is now a well-established academic area of research and teaching, covering subjects in the humanities and social sciences. Readers and introductions have outlined the field, and research is mirrored in networks, journals and conferences on the national...

  2. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    Science.gov (United States)

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756

  3. Temporal lobe epilepsy with varying severity: MRI study of 222 patients

    International Nuclear Information System (INIS)

    Lehericy, S.; Hasboun, D.; Dormont, D.; Marsault, C.; Semah, F.; Baulac, M.; Clemenceau, S.; Granat, O.

    1997-01-01

    MRI was performed in 222 consecutive adult patients with temporal lobe epilepsy of varying severity from January 1991 to May 1993. The diagnosis of hippocampal sclerosis was established visually by three independent observers. The accuracy of visual assessment of hippocampal asymmetry was compared with volumetric measurements. Neuropathological correlations were obtained in 63 patients with refractory seizures. Temporal lobe abnormalities were observed in 180 patients (81 %) as follows: hippocampal sclerosis in 122 (55 %); developmental abnormalities in 16 (7.2 %); tumours in 15 (6.8 %); scars in 11 (5 %); cavernous angiomas in 10 (4.5 %); miscellaneous lesions in 6. MRI was normal or showed unrelated changes in 42 patients (19 %). Visual assessment correctly lateralised hippocampal sclerosis in 79 of the 84 patients measured (94 %). Temporal lobectomy confirmed the MRI data (side and aetiology) in all 63 operated patients. Patients with normal MRI had an older age of seizure onset and were more often drug-responsive than patients with hippocampal sclerosis. MRI showed temporal lobe abnormalities in 81 % of epileptic patients with varying severity with good neuropathological correlation. Patients with normal MRI had a less severe form of the disease. (orig.)

  4. Clinical profile and visual outcome of ocular injuries in a rural area of western India.

    Science.gov (United States)

    Misra, Somen; Nandwani, Rupali; Gogri, Pratik; Misra, Neeta

    2013-01-01

    Ocular trauma is a major cause of visual impairment and morbidity worldwide. To identify the various types of ocular injury in a rural area, determine the presence of any associated visual damage and assess the final visual outcome after treatment. Hospital-based, prospective study conducted over a period of two years. A total of 60 patients with ocular trauma were included. Ocular injuries were more commonly seen in adult (55 per cent) patients who were associated with agricultural work (43.33 per cent). They were more common in male patients (71.67 per cent). Closed globe injury (68.33 per cent) was more common than open globe injury (31.67 per cent). Both in open and closed globe injuries, the commonest object causing injury was a wooden stick. Just 26.7 per cent of the patients had a visual acuity better than 6/60 at presentation; while after completion of treatment, at the two-month follow-up, 68.3 per cent had best corrected visual acuity better than 6/60. Agricultural trauma is an important cause of monocular blindness in rural India. The visual outcome depends upon the site and size of the injury and the extent of the ocular damage.

  5. Incorporating temporal variation in seabird telemetry data: time variant kernel density models

    Science.gov (United States)

    Gilbert, Andrew; Adams, Evan M.; Anderson, Carl; Berlin, Alicia; Bowman, Timothy D.; Connelly, Emily; Gilliland, Scott; Gray, Carrie E.; Lepage, Christine; Meattey, Dustin; Montevecchi, William; Osenkowski, Jason; Savoy, Lucas; Stenhouse, Iain; Williams, Kathryn

    2015-01-01

    A key component of the Mid-Atlantic Baseline Studies project was tracking the individual movements of focal marine bird species (Red-throated Loon [Gavia stellata], Northern Gannet [Morus bassanus], and Surf Scoter [Melanitta perspicillata]) through the use of satellite telemetry. This element of the project was a collaborative effort with the Department of Energy (DOE), Bureau of Ocean Energy Management (BOEM), the U.S. Fish and Wildlife Service (USFWS), and Sea Duck Joint Venture (SDJV), among other organizations. Satellite telemetry is an effective and informative tool for understanding individual animal movement patterns, allowing researchers to mark an individual once, and thereafter follow the movements of the animal in space and time. Aggregating telemetry data from multiple individuals can provide information about the spatial use and temporal movements of populations. Tracking data are three-dimensional, with the first two dimensions, X and Y, ordered along the third dimension, time. GIS software has many capabilities to store, analyze and visualize the location information, but little or no support for visualizing the temporal data, and tools for processing temporal data are lacking. We explored several ways of analyzing the movement patterns using the spatiotemporal data provided by satellite tags. Here, we present the results of one promising method: time-variant kernel density analysis (Keating and Cherry, 2009). The goal of this chapter is to demonstrate new methods in spatial analysis to visualize and interpret tracking data for a large number of individual birds across time in the mid-Atlantic study area and beyond. In this chapter, we placed greater emphasis on analytical methods than on the behavior and ecology of the animals tracked. For more detailed examinations of the ecology and wintering habitat use of the focal species in the mid-Atlantic, see Chapters 20-22.
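
    A minimal sketch of a time-weighted kernel density surface in the spirit of the time-variant approach cited above (Keating and Cherry, 2009), assuming location fixes are down-weighted by a Gaussian kernel on their temporal distance from a reference time before a standard 2D KDE is fitted. The coordinates, time units, bandwidth and function names are illustrative assumptions (weighted gaussian_kde requires scipy >= 1.2), not the chapter's exact workflow.

```python
import numpy as np
from scipy.stats import gaussian_kde

def time_variant_kde(x, y, t, t_ref, time_bandwidth_days=15.0):
    """2D kernel density of (x, y) fixes, down-weighted by distance in time from t_ref."""
    w = np.exp(-0.5 * ((t - t_ref) / time_bandwidth_days) ** 2)  # Gaussian time kernel
    return gaussian_kde(np.vstack([x, y]), weights=w)

# Usage: evaluate the density surface centred on day 15 on a coarse grid.
rng = np.random.default_rng(0)
x, y = rng.normal(0, 10, 500), rng.normal(0, 10, 500)
t = rng.uniform(0, 120, 500)                      # days since deployment
kde = time_variant_kde(x, y, t, t_ref=15.0)
gx, gy = np.meshgrid(np.linspace(-30, 30, 50), np.linspace(-30, 30, 50))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(density.max())
```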

  6. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  7. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    Science.gov (United States)

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  8. Wavefront coherence area for predicting visual acuity of post-PRK and post-PARK refractive surgery patients

    Science.gov (United States)

    Garcia, Daniel D.; van de Pol, Corina; Barsky, Brian A.; Klein, Stanley A.

    1999-06-01

    Many current corneal topography instruments (called videokeratographs) provide an 'acuity index' based on corneal smoothness to analyze expected visual acuity. However, post-refractive surgery patients often exhibit better acuity than is predicted by such indices. One reason for this is that visual acuity may not necessarily be determined by overall corneal smoothness but rather by having some part of the cornea able to focus light coherently onto the fovea. We present a new method of representing visual acuity by measuring the wavefront aberration, using principles from both ray and wave optics. For each point P on the cornea, we measure the size of the associated coherence area whose optical path length (OPL), from a reference plane to P's focus, is within a certain tolerance of the OPL for P. We measured the topographies and vision of 62 eyes of patients who had undergone the corneal refractive surgery procedures of photorefractive keratectomy (PRK) and photorefractive astigmatic keratectomy (PARK). In addition to high contrast visual acuity, our vision tests included low contrast and low luminance to test the contribution of the PRK transition zone. We found our metric for visual acuity to be better than all other metrics at predicting the acuity of low contrast and low luminance. However, high contrast visual acuity was poorly predicted by all of the indices we studied, including our own. The indices provided by current videokeratographs sometimes fail for corneas whose shape differs from simple ellipsoidal models. This is the case with post-PRK and post-PARK refractive surgery patients. Our alternative representation that displays the coherence area of the wavefront has considerable advantages, and promises to be a better predictor of low contrast and low luminance visual acuity than current shape measures.
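
    A minimal sketch of the coherence-area idea described above, assuming a sampled wavefront error map (optical path length differences in micrometres) over the pupil: for a chosen corneal point, it counts the pupil area whose OPL stays within a quarter-wavelength tolerance of that point's OPL. The map, grid spacing, tolerance and function name are illustrative assumptions rather than the authors' exact metric.

```python
import numpy as np

def coherence_area(opl, pupil_mask, sample_area_mm2, i, j, tol_um=0.55 / 4):
    """Area (mm^2) of pupil samples whose OPL is within tol_um of opl[i, j]."""
    within = np.abs(opl - opl[i, j]) <= tol_um
    return np.count_nonzero(within & pupil_mask) * sample_area_mm2

# Usage: a synthetic defocus-like wavefront over a 6 mm pupil sampled at 0.1 mm.
xs = np.linspace(-3, 3, 61)
X, Y = np.meshgrid(xs, xs)
pupil = X**2 + Y**2 <= 9.0
opl = 0.2 * (X**2 + Y**2)                        # micrometres of OPL error
print(coherence_area(opl, pupil, 0.01, 30, 30))  # area coherent with the pupil centre
```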

  9. Crossmodal plasticity in auditory, visual and multisensory cortical areas following noise-induced hearing loss in adulthood.

    Science.gov (United States)

    Schormans, Ashley L; Typlt, Marei; Allman, Brian L

    2017-01-01

    Complete or partial hearing loss results in an increased responsiveness of neurons in the core auditory cortex of numerous species to visual and/or tactile stimuli (i.e., crossmodal plasticity). At present, however, it remains uncertain how adult-onset partial hearing loss affects higher-order cortical areas that normally integrate audiovisual information. To that end, extracellular electrophysiological recordings were performed under anesthesia in noise-exposed rats two weeks post-exposure (0.8-20 kHz at 120 dB SPL for 2 h) and age-matched controls to characterize the nature and extent of crossmodal plasticity in the dorsal auditory cortex (AuD), an area outside of the auditory core, as well as in the neighboring lateral extrastriate visual cortex (V2L), an area known to contribute to audiovisual processing. Computer-generated auditory (noise burst), visual (light flash) and combined audiovisual stimuli were delivered, and the associated spiking activity was used to determine the response profile of each neuron sampled (i.e., unisensory, subthreshold multisensory or bimodal). In both the AuD cortex and the multisensory zone of the V2L cortex, the maximum firing rates were unchanged following noise exposure, and there was a relative increase in the proportion of neurons responsive to visual stimuli, with a concomitant decrease in the number of neurons that were solely responsive to auditory stimuli despite adjusting the sound intensity to account for each rat's hearing threshold. These neighboring cortical areas differed, however, in how noise-induced hearing loss affected audiovisual processing; the total proportion of multisensory neurons significantly decreased in the V2L cortex (control 38.8 ± 3.3% vs. noise-exposed 27.1 ± 3.4%), and dramatically increased in the AuD cortex (control 23.9 ± 3.3% vs. noise-exposed 49.8 ± 6.1%). Thus, following noise exposure, the cortical area showing the greatest relative degree of multisensory convergence

  10. A Multi-Area Stochastic Model for a Covert Visual Search Task.

    Directory of Open Access Journals (Sweden)

    Michael A Schwemmer

    Full Text Available Decisions typically comprise several elements. For example, attention must be directed towards specific objects, their identities recognized, and a choice made among alternatives. Pairs of competing accumulators and drift-diffusion processes provide good models of evidence integration in two-alternative perceptual choices, but more complex tasks requiring the coordination of attention and decision making involve multistage processing and multiple brain areas. Here we consider a task in which a target is located among distractors and its identity reported by lever release. The data comprise reaction times, accuracies, and single unit recordings from two monkeys' lateral interparietal area (LIP neurons. LIP firing rates distinguish between targets and distractors, exhibit stimulus set size effects, and show response-hemifield congruence effects. These data motivate our model, which uses coupled sets of leaky competing accumulators to represent processes hypothesized to occur in feature-selective areas and limb motor and pre-motor areas, together with the visual selection process occurring in LIP. Model simulations capture the electrophysiological and behavioral data, and fitted parameters suggest that different connection weights between LIP and the other cortical areas may account for the observed behavioral differences between the animals.
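
    A minimal sketch of a single pair of leaky competing accumulators of the kind the model above couples across areas: each unit integrates its input, leaks, receives inhibition from its competitor, and the first unit to reach threshold determines the choice and reaction time. All parameter values, names and the two-alternative setup are illustrative assumptions, not the fitted model.

```python
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.4, noise_sd=0.1,
              threshold=1.0, dt=0.01, max_time=5.0, rng=None):
    """Simulate one trial; return (choice index, reaction time in seconds)."""
    rng = rng or np.random.default_rng()
    x = np.zeros(len(inputs))
    for step in range(int(max_time / dt)):
        other = x.sum() - x                      # summed activity of competitors
        dx = (inputs - leak * x - inhibition * other) * dt \
             + noise_sd * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)              # activations stay non-negative
        if x.max() >= threshold:
            return int(np.argmax(x)), (step + 1) * dt
    return -1, max_time                          # no decision reached

# Usage: a target unit with stronger input than its distractor usually wins.
choices = [lca_trial(np.array([1.2, 0.9]))[0] for _ in range(200)]
print(np.mean(np.array(choices) == 0))           # proportion of "target" choices
```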

  11. Spatial and temporal consumption dynamics of trout in catch-and-release areas in Arkansas tailwaters

    Science.gov (United States)

    Flinders, John M.; Magoulick, Daniel D.

    2017-01-01

    Restrictive angling regulations in tailwater trout fisheries may be unsuccessful if food availability limits energy for fish to grow. We examined spatial and temporal variation in energy intake and growth in populations of Brown Trout Salmo trutta and Rainbow Trout Oncorhynchus mykiss within three catch-and-release (C-R) areas in Arkansas tailwaters to evaluate food availability compared with consumption. Based on bioenergetic simulations, Rainbow Trout fed at submaintenance levels in both size-classes (≤400 mm TL, >400 mm TL) throughout most seasons. A particular bottleneck in food availability occurred in the winter for Rainbow Trout when the daily ration was substantially below the minimum required for maintenance, despite reduced metabolic costs associated with lower water temperatures. Rainbow Trout growth rates followed a similar pattern to consumption with negative growth rates during the winter periods. All three size-classes (400 mm TL) of Brown Trout experienced high growth rates and limited temporal bottlenecks in food availability. We observed higher mean densities for Rainbow Trout (47–342 fish/ha) than for Brown Trout (3–84 fish/ha) in all C-R areas. Lower densities of Brown Trout coupled with an ontogenetic shift towards piscivory may have allowed for higher growth rates and sufficient consumption rates to meet energetic demands. Brown Trout at current densities were more effective in maintaining adequate growth rates and larger sizes in C-R areas than were Rainbow Trout. Bioenergetic simulations suggest that reducing stocking levels of Rainbow Trout in the tailwaters may be necessary in order to achieve increased catch rates of larger trout in the C-R areas.

  12. Walk-related mimic word activates the extrastriate visual cortex in the human brain: an fMRI study.

    Science.gov (United States)

    Osaka, Naoyuki

    2009-03-02

    I present an fMRI study demonstrating that a mimic word highly suggestive of human walking, heard with the eyes closed, significantly activates the visual cortex located in the extrastriate occipital region (BA19, 18) and the superior temporal sulcus (STS), while hearing nonsense words that do not imply walking under the same task does not activate these areas in humans. I concluded that BA19 and BA18 would be critical regions for generating visual images of walking and the related intentional stance, respectively, evoked by an onomatopoeic word implying walking.

  13. Temporal distribution of sediment yield from catchments covered by different pine plantation areas

    Directory of Open Access Journals (Sweden)

    Tyas Mutiara Basuki

    2018-04-01

    Full Text Available Soil erosion and sedimentation are environmental problems faced by tropical countries. Much research on soil erosion and sedimentation has been conducted, with various results. Quantifying soil erosion and sedimentation and their temporal distribution is important for watershed management. Therefore, a study was conducted with the objective of quantifying the amount of suspended sediment from catchments under various pine plantation areas. The research was undertaken from 2010 to 2017 in seven catchments with various percentages of pine coverage in Kebumen Regency, Central Java Province. Rainfall data were collected from two rainfall stations. A tide gauge was installed at the outlet of each catchment to monitor stream water level. Water samples for every stream water level increment were analyzed to obtain sediment concentration. The results showed that monthly suspended sediment of the catchments was high from January to April and from October to December, and low from May to September. The annual suspended sediment fluctuated during the study period. Non-linear correlations were observed between suspended sediment and rainfall, and between suspended sediment and the percentage of pine area. The trend line between suspended sediment and the percentage of pine area showed that an increase in pine area decreased suspended sediment, with a steep slope for pine areas from 8% to 40% and a gentle slope for pine plantation areas above 40%.

  14. A computational theory of visual receptive fields.

    Science.gov (United States)

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative
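
    A minimal sketch of one member of the receptive-field family discussed above, a first-order spatial Gaussian derivative, applied as a linear filter to an image with scipy; the image, scale, derivative order and function name are illustrative assumptions rather than the paper's full spatio-temporal formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative_response(image, sigma=2.0, order=(0, 1)):
    """Response of a Gaussian-derivative receptive field at scale sigma.

    order=(0, 1) differentiates along the x axis, (1, 0) along y,
    and higher orders give higher-order derivative operators.
    """
    return gaussian_filter(image.astype(float), sigma=sigma, order=order)

# Usage: a vertical edge produces a strong response in the x-derivative map.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
resp = gaussian_derivative_response(img, sigma=2.0, order=(0, 1))
print(resp[32, 30:35])   # peaks near the edge location
```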

  15. Visual rating of medial temporal lobe metabolism in mild cognitive impairment and Alzheimer's disease using FDG-PET

    Energy Technology Data Exchange (ETDEWEB)

    Mosconi, Lisa [New York University School of Medicine, Department of Psychiatry, New York, NY (United States); University of Florence, Department of Clinical Pathophysiology, Nuclear Medicine Unit, Florence (Italy); New York University School of Medicine, Center for Brain Health, New York, NY (United States); Santi, Susan De; Li, Yi; Li, Juan; Zhan, Jiong; Boppana, Madhu [New York University School of Medicine, Department of Psychiatry, New York, NY (United States); Tsui, Wai Hon; Leon, Mony J. de [New York University School of Medicine, Department of Psychiatry, New York, NY (United States); Nathan Kline Institute, Orangeburg, NY (United States); Pupi, Alberto [University of Florence, Department of Clinical Pathophysiology, Nuclear Medicine Unit, Florence (Italy)

    2006-02-01

    This study was designed to examine the utility of visual inspection of medial temporal lobe (MTL) metabolism in the diagnosis of mild cognitive impairment (MCI) and Alzheimer's disease (AD) using FDG-PET scans. Seventy-five subjects [27 normal controls (NL), 26 MCI, and 22 AD] with FDG-PET and MRI scans were included in this study. We developed a four-point visual rating scale to evaluate the presence and severity of MTL hypometabolism on FDG-PET scans. The visual MTL ratings were compared with quantitative glucose metabolic rate (MRglc) data extracted using regions of interest (ROIs) from the MRI-coregistered PET scans of all subjects. A standard rating evaluation of neocortical hypometabolism was also completed. Logistic regressions were used to determine and compare the diagnostic accuracy of the MTL and cortical ratings. For both MTL and cortical ratings, high intra- and inter-rater reliabilities were found (p values <0.001). The MTL rating was highly correlated with and yielded a diagnostic accuracy equivalent to the ROI MRglc measures (p values <0.001). The combination of MTL and cortical ratings significantly improved the diagnostic accuracy over the cortical rating alone, with 100% of AD, 77% of MCI, and 85% of NL cases being correctly identified. This study shows that the visual rating of MTL hypometabolism on PET is reliable, yields a diagnostic accuracy equal to the quantitative ROI measures, and is clinically useful and more sensitive than cortical ratings for patients with MCI. We suggest this method be further evaluated for its potential in the early diagnosis of AD. (orig.)
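
    A minimal sketch of the kind of logistic-regression comparison of diagnostic accuracy reported above, simplified to a binary AD-vs-control classification from four-point visual ratings; the synthetic ratings, group sizes, variable names and cross-validation scheme are illustrative assumptions, not the study's data or analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 50
labels = np.repeat([0, 1], n)                        # 0 = control, 1 = AD
mtl_rating = np.clip(rng.normal(labels * 2.0, 0.8), 0, 3).round()
cortical_rating = np.clip(rng.normal(labels * 1.2, 1.0), 0, 3).round()

# Compare cross-validated accuracy of each rating alone and of both combined.
for name, X in [("MTL", mtl_rating[:, None]),
                ("cortical", cortical_rating[:, None]),
                ("MTL + cortical", np.column_stack([mtl_rating, cortical_rating]))]:
    acc = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```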

  16. Neuronal representations of stimulus associations develop in the temporal lobe during learning.

    Science.gov (United States)

    Messinger, A; Squire, L R; Zola, S M; Albright, T D

    2001-10-09

    Visual stimuli that are frequently seen together become associated in long-term memory, such that the sight of one stimulus readily brings to mind the thought or image of the other. It has been hypothesized that acquisition of such long-term associative memories proceeds via the strengthening of connections between neurons representing the associated stimuli, such that a neuron initially responding only to one stimulus of an associated pair eventually comes to respond to both. Consistent with this hypothesis, studies have demonstrated that individual neurons in the primate inferior temporal cortex tend to exhibit similar responses to pairs of visual stimuli that have become behaviorally associated. In the present study, we investigated the role of these areas in the formation of conditional visual associations by monitoring the responses of individual neurons during the learning of new stimulus pairs. We found that many neurons in both area TE and perirhinal cortex came to elicit more similar neuronal responses to paired stimuli as learning proceeded. Moreover, these neuronal response changes were learning-dependent and proceeded with an average time course that paralleled learning. This experience-dependent plasticity of sensory representations in the cerebral cortex may underlie the learning of associations between objects.

  17. Visualization of the Flux Rope Generation Process Using Large Quantities of MHD Simulation Data

    Directory of Open Access Journals (Sweden)

    Y Kubota

    2013-03-01

    Full Text Available We present a new concept of analysis using visualization of large quantities of simulation data. The time development of 3D objects with high temporal resolution provides the opportunity for scientific discovery. We visualize large quantities of simulation data using the visualization application 'Virtual Aurora' based on AVS (Advanced Visual Systems) and the parallel distributed processing at "Space Weather Cloud" in NICT based on Gfarm technology. We introduce two results of high temporal resolution visualization: the magnetic flux rope generation process and dayside reconnection using a system of magnetic field line tracing.

  18. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also that typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  19. The putative visual word form area is functionally connected to the dorsal attention network.

    Science.gov (United States)

    Vogel, Alecia C; Miezin, Fran M; Petersen, Steven E; Schlaggar, Bradley L

    2012-03-01

    The putative visual word form area (pVWFA) is the most consistently activated region in single word reading studies (i.e., Vigneau et al. 2006), yet its function remains a matter of debate. The pVWFA may be predominantly used in reading or it could be a more general visual processor used in reading but also in other visual tasks. Here, resting-state functional connectivity magnetic resonance imaging (rs-fcMRI) is used to characterize the functional relationships of the pVWFA to help adjudicate between these possibilities. rs-fcMRI defines relationships based on correlations in slow fluctuations of blood oxygen level-dependent activity occurring at rest. In this study, rs-fcMRI correlations show little relationship between the pVWFA and reading-related regions but a strong relationship between the pVWFA and dorsal attention regions thought to be related to spatial and feature attention. The rs-fcMRI correlations between the pVWFA and regions of the dorsal attention network increase with age and reading skill, while the correlations between the pVWFA and reading-related regions do not. These results argue the pVWFA is not used predominantly in reading but is a more general visual processor used in other visual tasks, as well as reading.
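
    A minimal sketch of a seed-based resting-state functional connectivity computation of the kind used above, assuming preprocessed BOLD time courses: the seed (e.g., the pVWFA) is correlated with each candidate region and the correlations are Fisher z-transformed. The synthetic time courses, region labels and function name are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def seed_connectivity(seed_ts, region_ts):
    """Fisher z-transformed correlation between a seed and each region time course."""
    r = np.array([np.corrcoef(seed_ts, ts)[0, 1] for ts in region_ts])
    return np.arctanh(r)

# Usage with synthetic time courses (200 volumes, 3 candidate regions).
rng = np.random.default_rng(2)
seed = rng.standard_normal(200)
regions = np.vstack([0.7 * seed + 0.7 * rng.standard_normal(200),   # strongly coupled region
                     0.2 * seed + 1.0 * rng.standard_normal(200),   # weakly coupled region
                     rng.standard_normal(200)])                     # unrelated control region
print(seed_connectivity(seed, regions))
```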

  20. Attention increases the temporal precision of conscious perception: verifying the Neural-ST Model.

    Directory of Open Access Journals (Sweden)

    Srivas Chennu

    2009-11-01

    Full Text Available What role does attention play in ensuring the temporal precision of visual perception? Behavioural studies have investigated feature selection and binding in time using fleeting sequences of stimuli in the Rapid Serial Visual Presentation (RSVP) paradigm, and found that temporal accuracy is reduced when attentional control is diminished. To reduce the efficacy of attentional deployment, these studies have employed the Attentional Blink (AB) phenomenon. In this article, we use electroencephalography (EEG) to directly investigate the temporal dynamics of conscious perception. Specifically, employing a combination of experimental analysis and neural network modelling, we test the hypothesis that the availability of attention reduces temporal jitter in the latency between a target's visual onset and its consolidation into working memory. We perform time-frequency analysis on data from an AB study to compare the EEG trials underlying the P3 ERPs (Event-Related Potentials) evoked by targets seen outside vs. inside the AB time window. We find visual differences in phase-sorted ERPimages and statistical differences in the variance of the P3 phase distributions. These results argue for increased variation in the latency of conscious perception during the AB. This experimental analysis is complemented by a theoretical exploration of temporal attention and target processing. Using activation traces from the Neural-ST(2) model, we generate virtual ERPs and virtual ERPimages. These are compared to their human counterparts to propose an explanation of how target consolidation in the context of the AB influences the temporal variability of selective attention. The AB provides us with a suitable phenomenon with which to investigate the interplay between attention and perception. The combination of experimental and theoretical elucidation in this article contributes to converging evidence for the notion that the AB reflects a reduction in the temporal acuity of
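
    A minimal sketch of one way to quantify trial-to-trial variability of phase, in the spirit of the P3 phase-distribution comparison above: band-pass each trial, take the analytic phase at a fixed post-target latency, and compute the circular variance across trials. The filter band, latency, function name and synthetic trials are illustrative assumptions, not the authors' time-frequency pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def phase_circular_variance(trials, fs, t_index, band=(1.0, 8.0)):
    """Circular variance of single-trial phase at sample t_index (0 = perfectly aligned)."""
    sos = butter(3, band, btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trials, axis=1)
    phases = np.angle(hilbert(filtered, axis=1))[:, t_index]
    return 1.0 - np.abs(np.mean(np.exp(1j * phases)))

# Usage: jittered-latency trials show higher circular variance than aligned ones.
fs, n_trials, n_samples = 250, 60, 500
t = np.arange(n_samples) / fs
rng = np.random.default_rng(3)
aligned = np.array([np.sin(2 * np.pi * 2 * (t - 0.4)) for _ in range(n_trials)])
jittered = np.array([np.sin(2 * np.pi * 2 * (t - 0.4 - rng.uniform(0, 0.2)))
                     for _ in range(n_trials)])
noise = 0.3 * rng.standard_normal((n_trials, n_samples))
print(phase_circular_variance(aligned + noise, fs, t_index=125),
      phase_circular_variance(jittered + noise, fs, t_index=125))
```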

  1. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas - a review

    Science.gov (United States)

    Cristiano, Elena; ten Veldhuis, Marie-claire; van de Giesen, Nick

    2017-07-01

    In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological response, based on small-scale representation of urban catchment spatial variability. Despite these efforts, interactions between rainfall variability, catchment heterogeneity, and hydrological response remain poorly understood. This paper presents a review of our current understanding of hydrological processes in urban environments as reported in the literature, focusing on their spatial and temporal variability aspects. We review recent findings on the effects of rainfall variability on hydrological response and identify gaps where knowledge needs to be further developed to improve our understanding of and capability to predict urban hydrological response.

  2. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity.

    Science.gov (United States)

    Laing, Mark; Rees, Adrian; Vuong, Quoc C

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.

  3. Neuronal codes for visual perception and memory.

    Science.gov (United States)

    Quian Quiroga, Rodrigo

    2016-03-01

    In this review, I describe and contrast the representation of stimuli in visual cortical areas and in the medial temporal lobe (MTL). While cortex is characterized by a distributed and implicit coding that is optimal for recognition and storage of semantic information, the MTL shows a much sparser and explicit coding of specific concepts that is ideal for episodic memory. I will describe the main characteristics of the coding in the MTL by the so-called concept cells and will then propose a model of the formation and recall of episodic memory based on partially overlapping assemblies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05), and audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
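
    A minimal sketch of a response-time CDF analysis of the kind described above, using Miller's race-model inequality as an index of audiovisual integration: the audiovisual CDF is compared with the sum of the unisensory CDFs. The response-time samples, grid and function name are synthetic assumptions; the study's exact procedure may differ.

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative distribution of response times evaluated on grid (ms)."""
    return np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

rng = np.random.default_rng(4)
rt_a = rng.normal(420, 60, 300)          # auditory-only trials
rt_v = rng.normal(450, 60, 300)          # visual-only trials
rt_av = rng.normal(370, 55, 300)         # audiovisual trials (faster than either)

grid = np.arange(200, 701, 10)
race_bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
violation = ecdf(rt_av, grid) - race_bound
print(violation.max())                   # > 0 indicates integration beyond the race model
```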

  5. On the role of visual experience in mathematical development: Evidence from blind mathematicians.

    Science.gov (United States)

    Amalric, Marie; Denghien, Isabelle; Dehaene, Stanislas

    2018-04-01

    Advanced mathematical reasoning, regardless of domain or difficulty, activates a reproducible set of bilateral brain areas including intraparietal, inferior temporal and dorsal prefrontal cortex. The respective roles of genetics, experience and education in the development of this math-responsive network, however, remain unresolved. Here, we investigate the role of visual experience by studying the exceptional case of three professional mathematicians who were blind from birth (n=1) or became blind during childhood (n=2). Subjects were scanned with fMRI while they judged the truth value of spoken mathematical and nonmathematical statements. Blind mathematicians activated the classical network of math-related areas during mathematical reflection, similar to that found in a group of sighted professional mathematicians. Thus, brain networks for advanced mathematical reasoning can develop in the absence of visual experience. Additional activations were found in occipital cortex, even in individuals who became blind during childhood, suggesting that either mental imagery or a more radical repurposing of visual cortex may occur in blind mathematicians. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. On the role of visual experience in mathematical development: Evidence from blind mathematicians

    Directory of Open Access Journals (Sweden)

    Marie Amalric

    2018-04-01

    Full Text Available Advanced mathematical reasoning, regardless of domain or difficulty, activates a reproducible set of bilateral brain areas including intraparietal, inferior temporal and dorsal prefrontal cortex. The respective roles of genetics, experience and education in the development of this math-responsive network, however, remain unresolved. Here, we investigate the role of visual experience by studying the exceptional case of three professional mathematicians who were blind from birth (n = 1) or became blind during childhood (n = 2). Subjects were scanned with fMRI while they judged the truth value of spoken mathematical and nonmathematical statements. Blind mathematicians activated the classical network of math-related areas during mathematical reflection, similar to that found in a group of sighted professional mathematicians. Thus, brain networks for advanced mathematical reasoning can develop in the absence of visual experience. Additional activations were found in occipital cortex, even in individuals who became blind during childhood, suggesting that either mental imagery or a more radical repurposing of visual cortex may occur in blind mathematicians. Keywords: Advanced mathematical development, Blindness, Functional MRI

  7. Deconstruction of spatial integrity in visual stimulus detected by modulation of synchronized activity in cat visual cortex.

    Science.gov (United States)

    Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B

    2008-04-02

    Spatiotemporal relationships among contour segments can influence synchronization of neural responses in the primary visual cortex. We performed a systematic study to dissociate the impact of spatial and temporal factors in the signaling of contour integration via synchrony. In addition, we characterized the temporal evolution of this process to clarify potential underlying mechanisms. With a 10 x 10 microelectrode array, we recorded the simultaneous activity of multiple cells in the cat primary visual cortex while stimulating with drifting sine-wave gratings. We preserved temporal integrity and systematically degraded spatial integrity of the sine-wave gratings by adding spatial noise. Neural synchronization was analyzed in the time and frequency domains by conducting cross-correlation and coherence analyses. The general association between neural spike trains depends strongly on spatial integrity, with coherence in the gamma band (35-70 Hz) showing greater sensitivity to the change of spatial structure than other frequency bands. Analysis of the temporal dynamics of synchronization in both time and frequency domains suggests that spike timing synchronization is triggered nearly instantaneously by coherent structure in the stimuli, whereas frequency-specific oscillatory components develop more slowly, presumably through network interactions. Our results suggest that, whereas temporal integrity is required for the generation of synchrony, spatial integrity is critical in triggering subsequent gamma band synchronization.
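
    A minimal sketch of a spike-train coherence analysis of the kind performed above, assuming two simultaneously recorded spike trains are binned into counts and their spectral coherence is averaged over the gamma band (35-70 Hz); the bin size, spectral parameters, function name and synthetic spike times are illustrative assumptions, not the authors' analysis settings.

```python
import numpy as np
from scipy.signal import coherence

def gamma_band_coherence(spikes_a, spikes_b, duration_s, bin_s=0.002, band=(35.0, 70.0)):
    """Mean coherence between two spike trains within the gamma band."""
    edges = np.arange(0.0, duration_s + bin_s, bin_s)
    counts_a, _ = np.histogram(spikes_a, bins=edges)
    counts_b, _ = np.histogram(spikes_b, bins=edges)
    fs = 1.0 / bin_s
    f, cxy = coherence(counts_a, counts_b, fs=fs, nperseg=256)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# Usage: two trains locked to a common 50 Hz rhythm are coherent in the gamma band.
rng = np.random.default_rng(5)
base = np.arange(0, 10, 1 / 50.0)                      # shared 50 Hz drive
train_a = base + rng.normal(0, 0.002, base.size)
train_b = base + rng.normal(0, 0.002, base.size)
print(gamma_band_coherence(train_a, train_b, duration_s=10.0))
```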

  8. Identifying the spatial and temporal variability of economic opportunity costs to promote the adoption of alternative land uses in grain growing agricultural areas: an Australian example.

    Science.gov (United States)

    Lyle, G; Bryan, B A; Ostendorf, B

    2015-05-15

    Grain growers face many future challenges requiring them to adapt their land uses to changing economic, social and environmental conditions. To understand where to make on ground changes without significant negative financial repercussions, high resolution information on income generation over time is required. We propose a methodology which utilises high resolution yield data collected with precision agriculture (PA) technology, gross margin financial analysis and a temporal standardisation technique to highlight the spatial and temporal consistency of farm income. On three neighbouring farms in Western Australia, we found non-linear relationships between income and area. Spatio-temporal analysis on one farm over varying seasons found that between 37 and 49% (1082-1433ha) of cropping area consistently produced above the selected income thresholds and 43-32% (936-1257ha) regularly produced below selected thresholds. Around 20% of area showed inconsistent temporal variation in income generation. Income estimated from these areas represents the income forgone if a land use change is undertaken (the economic opportunity cost) and the average costs varied spatially from $190±114/ha to $560±108/ha depending on what scenario was chosen. The interaction over space and time showed the clustering of areas with similar values at a resolution where growers make input decisions. This new evidence suggests that farm area could be managed with two strategies: (a) one that maximises grain output using PA management in temporally stable areas which generate moderate to high income returns and (b) one that proposes land use change in low and inconsistent income returning areas where the financial returns from an alternative land use may be comparable. The adoption of these strategies can help growers meet the demand for agricultural output and offer income diversity and adaptive capacity to deal with the future challenges to agricultural production. Copyright © 2015 Elsevier Ltd

  9. Functional network connectivity underlying food processing: disturbed salience and visual processing in overweight and obese adults.

    Science.gov (United States)

    Kullmann, Stephanie; Pape, Anna-Antonia; Heni, Martin; Ketterer, Caroline; Schick, Fritz; Häring, Hans-Ulrich; Fritsche, Andreas; Preissl, Hubert; Veit, Ralf

    2013-05-01

    In order to adequately explore the neurobiological basis of eating behavior of humans and their changes with body weight, interactions between brain areas or networks need to be investigated. In the current functional magnetic resonance imaging study, we examined the modulating effects of stimulus category (food vs. nonfood), caloric content of food, and body weight on the time course and functional connectivity of 5 brain networks by means of independent component analysis in healthy lean and overweight/obese adults. These functional networks included motor sensory, default-mode, extrastriate visual, temporal visual association, and salience networks. We found an extensive modulation elicited by food stimuli in the 2 visual and salience networks, with a dissociable pattern in the time course and functional connectivity between lean and overweight/obese subjects. Specifically, only in lean subjects, the temporal visual association network was modulated by the stimulus category and the salience network by caloric content, whereas overweight and obese subjects showed a generalized augmented response in the salience network. Furthermore, overweight/obese subjects showed changes in functional connectivity in networks important for object recognition, motivational salience, and executive control. These alterations could potentially lead to top-down deficiencies driving the overconsumption of food in the obese population.

  10. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given
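
    A minimal sketch of how a point of subjective simultaneity (PSS) and just noticeable difference (JND) can be derived from temporal order judgments like those above, by fitting a cumulative Gaussian to the proportion of "visual first" reports across SOAs and taking half the 25-75% interval; the SOAs, response proportions and function names are illustrative assumptions, not the rats' data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, pss, sigma):
    """Probability of reporting 'visual first' as a function of SOA (ms)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-200, -100, -40, -10, 0, 10, 40, 100, 200])   # positive = visual leads
p_visual_first = np.array([0.05, 0.15, 0.35, 0.45, 0.52, 0.60, 0.72, 0.90, 0.97])

(pss, sigma), _ = curve_fit(cumulative_gaussian, soas, p_visual_first, p0=(0.0, 80.0))
jnd = norm.ppf(0.75) * sigma        # half the central 25-75% interval
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```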

  11. Quantitative Evaluation of Medial Temporal Lobe Morphology in Children with Febrile Status Epilepticus: Results of the FEBSTAT Study.

    Science.gov (United States)

    McClelland, A C; Gomes, W A; Shinnar, S; Hesdorffer, D C; Bagiella, E; Lewis, D V; Bello, J A; Chan, S; MacFall, J; Chen, M; Pellock, J M; Nordli, D R; Frank, L M; Moshé, S L; Shinnar, R C; Sun, S

    2016-12-01

    The pathogenesis of febrile status epilepticus is poorly understood, but prior studies have suggested an association with temporal lobe abnormalities, including hippocampal malrotation. We used a quantitative morphometric method to assess the association between temporal lobe morphology and febrile status epilepticus. Brain MR imaging was performed in children presenting with febrile status epilepticus and control subjects as part of the Consequences of Prolonged Febrile Seizures in Childhood study. Medial temporal lobe morphologic parameters were measured manually, including the distance of the hippocampus from the midline, hippocampal height:width ratio, hippocampal angle, collateral sulcus angle, and width of the temporal horn. Temporal lobe morphologic parameters were correlated with the presence of visual hippocampal malrotation; the strongest association was with left temporal horn width (P status epilepticus, encompassing both the right and left sides. This association was statistically strongest in the right temporal lobe, whereas hippocampal malrotation was almost exclusively left-sided in this cohort. The association between temporal lobe measurements and febrile status epilepticus persisted when the analysis was restricted to cases with visually normal imaging findings without hippocampal malrotation or other visually apparent abnormalities. Several component morphologic features of hippocampal malrotation are independently associated with febrile status epilepticus, even when complete hippocampal malrotation is absent. Unexpectedly, this association predominantly involves the right temporal lobe. These findings suggest that a spectrum of bilateral temporal lobe anomalies are associated with febrile status epilepticus in children. Hippocampal malrotation may represent a visually apparent subset of this spectrum. © 2016 by American Journal of Neuroradiology.

  12. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire the neural signal related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Magnetic source localization of early visual mismatch response

    NARCIS (Netherlands)

    Susac, A.; Heslenfeld, D.J.; Huonker, R.; Supek, S.

    2014-01-01

    Previous studies have reported a visual analogue of the auditory mismatch negativity (MMN) response that is based on sensory memory. The neural generators and attention dependence of the visual MMN (vMMN) still remain unclear. We used magnetoencephalography (MEG) and spatio-temporal source

  14. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    Science.gov (United States)

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Temporal properties of the lens eyes of the box jellyfish Tripedalia cystophora

    DEFF Research Database (Denmark)

    O'Connor, Megan; Nilsson, Dan-E; Garm, Anders Lydik

    2010-01-01

    Box jellyfish (Cubomedusae) are visually orientating animals which possess a total of 24 eyes of 4 morphological types; 2 pigment cup eyes (pit eye and slit eye) and 2 lens eyes [upper lens-eye (ule) and lower lens-eye (lle)]. In this study, we use electroretinograms (ERGs) to explore temporal properties of the two lens eyes. We find that the ERGs of both lens eyes are complex and, using sinusoidal flicker stimuli, we find that both lens eyes have slow temporal resolution. The average flicker fusion frequency (FFF) was found to be approximately 10 Hz for the ule and 8 Hz for the lle. Differences in the FFF and response patterns between the two lens eyes suggest that the ule and lle filter information differently in the temporal domain and thus are tuned to perform different visual tasks. The data collected in this study support the idea that the visual system of box jellyfish is a collection of special...

  16. Monitoring Ground Deformation of Subway Area during the Construction Based on the Method of Multi-Temporal Coherent Targets Analysis

    Science.gov (United States)

    Zhang, L.; Wu, J.; Zhao, J.; Yuan, M.

    2018-04-01

    Multi-temporal coherent targets analysis is a high-precision, high-spatial-resolution monitoring method for urban surface deformation based on Differential Synthetic Aperture Radar (DInSAR), and has been successfully applied to measure land subsidence, landslides, and strain accumulation caused by fault movement. In this paper, multi-temporal coherent targets analysis is used to study the settlement of a subway area during the period of subway construction. The eastern extension of Shanghai Metro Line 2 is taken as an example. The extension starts at Longyang Road and ends at Pudong Airport; it is 29.9 kilometers long from east to west and is a key transportation line to the airport. Seventeen PALSAR images acquired between 2007 and 2010 are used to analyze and invert the settlement of the buildings near the subway with multi-temporal coherent targets analysis. Three significant deformation areas were found near Line 2 between 2007 and 2010, with a maximum subsidence rate of up to 30 mm/y along the line of sight (LOS). The settlement near the Longyang Road station and Chuansha Town is caused by new construction and city expansion, whereas the coastal dikes suffer from heavy settlement at a rate of up to -30 mm/y. In general, the area close to the subway line is relatively stable during the construction period.

  17. Middle and Inferior Temporal Gyrus Gray Matter Volume Abnormalities in Chronic Schizophrenia: An MRI Study

    OpenAIRE

    Onitsuka, Toshiaki; Shenton, Martha E.; Salisbury, Dean F.; Dickey, Chandlee C.; Kasai, Kiyoto; Toner, Sarah K.; Frumin, Melissa; Kikinis, Ron; Jolesz, Ferenc A.; McCarley, Robert W.

    2004-01-01

    Objective: The middle temporal gyrus and inferior temporal gyrus subserve language and semantic memory processing, visual perception, and multimodal sensory integration. Functional deficits in these cognitive processes have been well documented in patients with schizophrenia. However, there have been few in vivo structural magnetic resonance imaging (MRI) studies of the middle temporal gyrus and inferior temporal gyrus in schizophrenia. Method: Middle temporal gyrus and inferior temporal gyru...

  18. Temporal attention for visual food stimuli in restrained eaters

    NARCIS (Netherlands)

    Neimeijer, Renate A. M.; de Jong, Peter J.; Roefs, Anne

    2013-01-01

    Although restrained eaters try to limit their food intake, they often fail and indulge in exactly those foods that they want to avoid. A possible explanation is a temporal attentional bias for food cues. It could be that for these people food stimuli are processed relatively efficiently and require

  19. Long-term music training tunes how the brain temporally binds signals from multiple senses

    OpenAIRE

    Lee, HweeLing; Noppeney, Uta

    2011-01-01

    Practicing a musical instrument is a rich multisensory experience involving the integration of visual, auditory, and tactile inputs with motor responses. This combined psychophysics–fMRI study used the musician's brain to investigate how sensory-motor experience molds temporal binding of auditory and visual signals. Behaviorally, musicians exhibited a narrower temporal integration window than nonmusicians for music but not for speech. At the neural level, musicians showed increased audiovisua...

  20. Multi-temporal Land Use Mapping of Coastal Wetlands Area using Machine Learning in Google Earth Engine

    Science.gov (United States)

    Farda, N. M.

    2017-12-01

    Coastal wetlands provide ecosystem services essential to people and the environment. Changes in coastal wetlands, especially in land use, are important to monitor using multi-temporal imagery. Google Earth Engine (GEE) provides many machine learning algorithms (10 algorithms) that are very useful for extracting land use from imagery. The research objective is to explore machine learning in Google Earth Engine and its accuracy for multi-temporal land use mapping of a coastal wetland area. Landsat 3 MSS (1978), Landsat 5 TM (1991), Landsat 7 ETM+ (2001), and Landsat 8 OLI (2014) images covering the Segara Anakan lagoon are selected to represent the multi-temporal imagery. The inputs for machine learning are the visible and near-infrared bands, PCA bands, inverse PCA bands, bare soil index, vegetation index, wetness index, elevation from ASTER GDEM, GLCM (Haralick) texture, and polygon samples at 140 locations. Ten machine learning algorithms are applied to extract coastal wetland land use from the Landsat imagery: Fast Naive Bayes, CART (Classification and Regression Tree), Random Forests, GMO Max Entropy, Perceptron (Multi Class Perceptron), Winnow, Voting SVM, Margin SVM, Pegasos (Primal Estimated sub-GrAdient SOlver for SVM), and IKPamir (Intersection Kernel Passive Aggressive Method for Information Retrieval, SVM). Machine learning in Google Earth Engine is very helpful for multi-temporal land use mapping; the highest accuracy for land use mapping of the coastal wetland is obtained with CART, at 96.98% overall accuracy using K-fold cross-validation (K = 10). GEE is particularly useful for multi-temporal land use mapping with its ready-to-use imagery and classification algorithms, and is also very challenging for other applications.
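    For readers unfamiliar with the Earth Engine workflow summarized above, the following Python sketch reproduces only the CART step (the best-performing of the ten classifiers). The area of interest, the asset ID for the labeled polygons, the band selection, and the 'landuse' property are placeholders rather than the paper's actual inputs, and the paper additionally used PCA, index, elevation, and texture bands as well as 10-fold cross-validation.

```python
# Hypothetical sketch of the GEE workflow described above, using the Earth
# Engine Python API. Asset IDs, AOI coordinates, band names, and the
# 'landuse' property are placeholders, not the authors' actual inputs.
import ee

ee.Initialize()

# Landsat 8 OLI surface reflectance composite over a (hypothetical) lagoon AOI.
aoi = ee.Geometry.Rectangle([108.75, -7.75, 109.05, -7.55])
image = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
         .filterBounds(aoi)
         .filterDate('2014-01-01', '2014-12-31')
         .median()
         .clip(aoi))

bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5']  # visible + near-infrared bands

# 'samples' stands in for the 140 labeled polygons used in the paper.
samples = ee.FeatureCollection('users/example/coastal_wetland_samples')

training = image.select(bands).sampleRegions(
    collection=samples, properties=['landuse'], scale=30)

# CART, the best-performing of the 10 GEE classifiers reported above.
classifier = ee.Classifier.smileCart().train(
    features=training, classProperty='landuse', inputProperties=bands)

classified = image.select(bands).classify(classifier)

# Resubstitution accuracy; the paper used 10-fold cross-validation instead.
print(classifier.confusionMatrix().accuracy().getInfo())
```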

  1. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex

    DEFF Research Database (Denmark)

    Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.

    2016-01-01

    Most neurons have a threshold separating the silent non-spiking state and the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus easily be distinguished. In addition, the trajectories of spontaneous ongoing states...

  2. Pathology of Visual Memory in Patients with Epilepsy

    Directory of Open Access Journals (Sweden)

    Reza Pourhosein

    2016-12-01

    Full Text Available Background: Epileptic seizures have destructive effects on the brain because they interfere with healthy and normal brain processes, disrupt different stages of memory, and impair its performance and function, especially in the early years of life. The purpose of this study was to investigate memory, one of the important areas of cognition, in patients with epilepsy. Methods: In this causal-comparative study, the subjects were 52 children with epilepsy aged 8 to 14 years. Among them, 15, 16, and 15 patients had parietal lobe epilepsy, temporal lobe epilepsy, and frontal lobe epilepsy, respectively. The participants were selected from patients referred to a neurologist's clinic. The Rey-Osterrieth complex figure (ROCF) test was used to assess visual memory. Results: The visual memory scores in the epilepsy group were lower than those of the healthy group, and the difference between the two groups was significant (t = 33.76, df = 103, P < 0.001). No significant difference was obtained between the three epilepsy groups in terms of visual memory scores (f = 1.6, df = 2, P < 0.212). In the present research, no significant difference was observed in visual memory between the three epilepsy groups. Conclusion: It can be concluded that patients with epilepsy have impaired visual memory.

  3. Gravity Cues Embedded in the Kinematics of Human Motion Are Detected in Form-from-Motion Areas of the Visual System and in Motor-Related Areas.

    Science.gov (United States)

    Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J J; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine

    2017-01-01

    The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer's motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex.

  4. Body-selective areas in the visual cortex are less active in children than in adults

    NARCIS (Netherlands)

    Ross, Paddy D.; de Gelder, Beatrice; Crabbe, Frances; Grosbras, Marie-Helene

    2014-01-01

    Our ability to read other people's non-verbal signals gets refined throughout childhood and adolescence. How this is paralleled by brain development has been investigated mainly with regards to face perception, showing a protracted functional development of the face-selective visual cortical areas.

  5. Spatial frequency-dependent feedback of visual cortical area 21a modulating functional orientation column maps in areas 17 and 18 of the cat.

    Science.gov (United States)

    Huang, Luoxiu; Chen, Xin; Shou, Tiande

    2004-02-20

    The feedback effect of activity of area 21a on orientation maps of areas 17 and 18 was investigated in cats using intrinsic signal optical imaging. A spatial frequency-dependent decrease in response amplitude of orientation maps to grating stimuli was observed in areas 17 and 18 when area 21a was inactivated by local injection of GABA, or by a lesion induced by liquid nitrogen freezing. The decrease in response amplitude of orientation maps of areas 17 and 18 after the area 21a inactivation paralleled the normal response without the inactivation. Application in area 21a of bicuculline, a GABAa receptor antagonist, caused an increase in response amplitude of orientation maps of area 17. The results indicate a positive feedback from high-order visual cortical area 21a to lower-order areas underlying a spatial frequency-dependent mechanism.

  6. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex

    Science.gov (United States)

    Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.; Roland, Per E.

    2016-01-01

    Most neurons have a threshold separating the silent non-spiking state and the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus easily be distinguished. In addition, the trajectories of spontaneous ongoing states were slow, frequently changing direction. In single trials, sharp as well as smooth and slow transients transform the trajectories to be outward directed, fast and crossing the threshold to become evoked. Although the speeds of the evolution of the evoked states differ, the same domain of the state space is explored, indicating uniformity of the evoked states. All evoked states return to the spontaneous ongoing spiking state as in a typical mono-stable dynamical system. In single trials, neither the original spiking rates, nor the temporal evolution in state space could distinguish simple visual scenes. PMID:27582693

  7. Computerized tomographic visualization of niveau by turning the head in a case of pituitary apoplexy

    International Nuclear Information System (INIS)

    Sasajima, Toshio; Mineura, Katsuyoshi; Kowada, Masayoshi; Sasaki, Junko; Sasajima, Hiroyasu; Sakamoto, Tetsuya

    1987-01-01

    A case of pituitary apoplexy is presented in which a free niveau formation, a pathognomonic sign of this entity, was proved by means of computerized tomography (CT) by turning the head. A 48-year-old female had developed a sudden, excruciating, retroorbital headache, vomiting, and visual disturbance twice prior to admission. The visual acuity was 0.1 in the right eye and 0.6 in the left eye. The optic fundi were normal. There was a right temporal field loss and a left upper temporal quadrantanopsia. A plain skull film disclosed ballooning and a double floor of the sella turcica. A CT scan of the head, performed by means of a GE CT/T 8800 Scanner, showed an intrasellar low-density mass with a slightly enhanced rim. We were not convinced of the presence of a high-density area, though there seemed to be one adjacent to the posterior clinoid process and the sellar floor, because of the partial volume effect and the artifact related to the neighboring bones. Another high-resolution CT scan on the ensuing day, as the head was turned and then kept still at about 45 degrees to the right, while the patient was supine, for ten minutes, made it possible to visualize a free fluid level, comparable to a fluid-blood-density level. A transsphenoidal pituitary exploration identified blood fluid collection in the encapsulated tumor, a finding which was histologically consistent with the sinusoidal type of chromophobe adenoma. There was some microscopic evidence of necrosis and hemosiderin-laden cells. The postoperative course was uneventful; visual acuity improved without delay, and the temporal field defect became significantly smaller. Two weeks later, visual acuity had recovered to 1.2 uncorrected in each eye. CT and the pertinent position of the head might be quite helpful for the visualization and confirmation of a subtle free fluid level in cases of pituitary apoplexy. (author)

  8. Immersive Earth Science: Data Visualization in Virtual Reality

    Science.gov (United States)

    Skolnik, S.; Ramirez-Linan, R.

    2017-12-01

    Utilizing next generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360° field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission with VR visualizations that display temporally-aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.

  9. Research on Visual Analysis Methods of Terrorism Events

    Science.gov (United States)

    Guo, Wenyue; Liu, Haiyan; Yu, Anzhu; Li, Jing

    2016-06-01

    As terrorism events occur more and more frequently throughout the world, improving the capability to respond to social security incidents has become an important test of governments' ability to govern. Visual analysis has become an important method of event analysis because it is intuitive and effective. To analyse events' spatio-temporal distribution characteristics, the correlations among event items, and development trends, the spatio-temporal characteristics of terrorism events are discussed. A suitable event data table structure based on the "5W" theory is designed. Then, six types of visual analysis are proposed, and the use of thematic maps and statistical charts to realize visual analysis of terrorism events is studied. Finally, experiments were carried out using data provided by the Global Terrorism Database, and the results prove the validity of the methods.
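    The following sketch illustrates the kind of "5W" event table and simple temporal visual analysis described above; the column names and sample rows are hypothetical and do not reproduce the Global Terrorism Database schema.

```python
# Minimal sketch of a "5W" (who, what, when, where, why) event table and a
# simple temporal distribution chart. Columns and rows are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

events = pd.DataFrame([
    # who,       what,         when,         where (lat, lon),  why
    ("group_a", "bombing",    "2014-03-01", (33.3, 44.4),      "political"),
    ("group_b", "kidnapping", "2014-07-15", (34.5, 69.2),      "ransom"),
    ("group_a", "bombing",    "2015-01-20", (33.3, 44.4),      "political"),
], columns=["who", "what", "when", "where", "why"])

events["when"] = pd.to_datetime(events["when"])

# Temporal distribution: incidents per year (one of the proposed visual analyses).
per_year = events.groupby(events["when"].dt.year).size()
per_year.plot(kind="bar", xlabel="Year", ylabel="Incident count",
              title="Temporal distribution of events")
plt.tight_layout()
plt.show()
```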

  10. Psycho acoustical Measures in Individuals with Congenital Visual Impairment.

    Science.gov (United States)

    Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh

    2017-12-01

    In congenitally visually impaired individuals, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on different auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve congenitally visually impaired participants aged 18 to 40 years were recruited, along with an equal number of normally sighted participants. All participants had normal hearing sensitivity and normal middle ear functioning. Individuals with visual impairment showed superior thresholds in MDT, SRDT, and SNR50 compared to normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. The visually impaired group thus showed superior performance in auditory processing and speech perception on complex auditory perceptual tasks.

  11. Dyslexic children lack word selectivity gradients in occipito-temporal and inferior frontal cortex

    Directory of Open Access Journals (Sweden)

    O.A. Olulade

    2015-01-01

    Full Text Available fMRI studies using a region-of-interest approach have revealed that the ventral portion of the left occipito-temporal cortex, which is specialized for orthographic processing of visually presented words (and includes the so-called “visual word form area”, VWFA, is characterized by a posterior-to-anterior gradient of increasing selectivity for words in typically reading adults, adolescents, and children (e.g. Brem et al., 2006, 2009. Similarly, the left inferior frontal cortex (IFC has been shown to exhibit a medial-to-lateral gradient of print selectivity in typically reading adults (Vinckier et al., 2007. Functional brain imaging studies of dyslexia have reported relative underactivity in left hemisphere occipito-temporal and inferior frontal regions using whole-brain analyses during word processing tasks. Hence, the question arises whether gradient sensitivities in these regions are altered in dyslexia. Indeed, a region-of-interest analysis revealed the gradient-specific functional specialization in the occipito-temporal cortex to be disrupted in dyslexic children (van der Mark et al., 2009. Building on these studies, we here (1 investigate if a word-selective gradient exists in the inferior frontal cortex in addition to the occipito-temporal cortex in normally reading children, (2 compare typically reading with dyslexic children, and (3 examine functional connections between these regions in both groups. We replicated the previously reported anterior-to-posterior gradient of increasing selectivity for words in the left occipito-temporal cortex in typically reading children, and its absence in the dyslexic children. Our novel finding is the detection of a pattern of increasing selectivity for words along the medial-to-lateral axis of the left inferior frontal cortex in typically reading children and evidence of functional connectivity between the most lateral aspect of this area and the anterior aspects of the occipito-temporal cortex. We

  12. The global lambda visualization facility: An international ultra-high-definition wide-area visualization collaboratory

    Science.gov (United States)

    Leigh, J.; Renambot, L.; Johnson, Aaron H.; Jeong, B.; Jagodic, R.; Schwarz, N.; Svistula, D.; Singh, R.; Aguilera, J.; Wang, X.; Vishwanath, V.; Lopez, B.; Sandin, D.; Peterka, T.; Girado, J.; Kooima, R.; Ge, J.; Long, L.; Verlo, A.; DeFanti, T.A.; Brown, M.; Cox, D.; Patterson, R.; Dorn, P.; Wefel, P.; Levy, S.; Talandis, J.; Reitzer, J.; Prudhomme, T.; Coffin, T.; Davis, B.; Wielinga, P.; Stolk, B.; Bum, Koo G.; Kim, J.; Han, S.; Corrie, B.; Zimmerman, T.; Boulanger, P.; Garcia, M.

    2006-01-01

    The research outlined in this paper marks an initial global cooperative effort between visualization and collaboration researchers to build a persistent virtual visualization facility linked by ultra-high-speed optical networks. The goal is to enable the comprehensive and synergistic research and development of the necessary hardware, software and interaction techniques to realize the next generation of end-user tools for scientists to collaborate on the global Lambda Grid. This paper outlines some of the visualization research projects that were demonstrated at the iGrid 2005 workshop in San Diego, California.

  13. Long-Term Visuo-Gustatory Appetitive and Aversive Conditioning Potentiate Human Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Gert R. J. Christoffersen

    2017-09-01

    Full Text Available Human recognition of foods and beverages is often based on visual cues associated with flavors. The dynamics of neurophysiological plasticity related to acquisition of such long-term associations has only recently become the target of investigation. In the present work, the effects of appetitive and aversive visuo-gustatory conditioning were studied with high density EEG-recordings focusing on late components in the visual evoked potentials (VEPs), specifically the N2-P3 waves. Unfamiliar images were paired with either a pleasant or an unpleasant juice and VEPs evoked by the images were compared before and 1 day after the pairings. In electrodes located over posterior visual cortex areas, the following changes were observed after conditioning: the amplitude from the N2-peak to the P3-peak increased and the N2 peak delay was reduced. The percentage increase of N2-to-P3 amplitudes was asymmetrically distributed over the posterior hemispheres despite the fact that the images were bilaterally symmetrical across the two visual hemifields. The percentage increases of N2-to-P3 amplitudes in each experimental subject correlated with the subject's evaluation of positive or negative hedonic valences of the two juices. The results from 118 scalp electrodes gave surface maps of theta power distributions showing increased power over posterior visual areas after the pairings. Source current distributions calculated from swLORETA revealed that visual evoked currents rose as a result of conditioning in five cortical regions—from primary visual areas and into the inferior temporal gyrus (ITG). These learning-induced changes were seen after both appetitive and aversive training while a sham-trained control group showed no changes. It is concluded that long-term visuo-gustatory conditioning potentiated the N2-P3 complex, and it is suggested that the changes are regulated by the perceived hedonic valence of the US.

  14. Spatio-temporal dynamics and laterality effects of face inversion, feature presence and configuration, and face outline

    Directory of Open Access Journals (Sweden)

    Ksenija eMarinkovic

    2014-11-01

    Full Text Available Although a crucial role of the fusiform gyrus in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the fusiform gyrus) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and

  15. Attentional effects in the visual pathways

    DEFF Research Database (Denmark)

    Bundesen, Claus; Larsen, Axel; Kyllingsbæk, Søren

    2002-01-01

    nucleus. Frontal activations were found in a region that seems implicated in visual short-term memory (posterior parts of the superior sulcus and the middle gyrus). The reverse, color-shape comparison showed bilateral increases in rCBF in the anterior cingulate gyri, superior frontal gyri, and superior...... and middle temporal gyri. The attentional effects found by the shape-color comparison in the thalamus and the primary visual cortex may have been generated by feedback signals preserving visual representations of selected stimuli in short-term memory....

  16. The impact of inverted text on visual word processing: An fMRI study.

    Science.gov (United States)

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found to not behave similarly to the fusiform face area in that unusual text orientations resulted in increased activation and not decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Learning and memory and its relationship with the lateralization of epileptic focus in subjects with temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Daniel Fuentes

    2014-04-01

    Full Text Available Background : In medial temporal lobe epilepsy (MTLE), previous studies addressing the hemispheric laterality of epileptogenic focus and its relationship with learning and memory processes have reported controversial findings. Objective : To compare the performance of MTLE patients according to the location of the epileptogenic focus on the left (MTLEL) or right temporal lobe (MTLER) on tasks of episodic learning and memory for verbal and visual content. Methods : One hundred patients with MTLEL and one hundred patients with MTLER were tested with the following tasks: the Rey Auditory Verbal Learning Test (RAVLT) and the Logical Memory-WMS-R to evaluate verbal learning and memory; and the Rey Visual Design Learning Test (RVDLT) and the Visual Reproduction-WMS-R to evaluate visual learning and memory. Results : The MTLEL sample showed significantly worse performance on the RAVLT (p < 0.005) and on the Logical Memory tests (p < 0.01) than MTLER subjects. However, there were no significant between-group differences in regard to the visual memory tests. Discussion : Our findings suggest that verbal learning and memory abilities are dependent on the structural and functional integrity of the left temporal lobe, while visual abilities are less dependent on the right temporal lobe.

  18. Spatial and temporal variability of rainfall and their effects on hydrological response in urban areas – a review

    Directory of Open Access Journals (Sweden)

    E. Cristiano

    2017-07-01

    Full Text Available In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological response, based on small-scale representation of urban catchment spatial variability. Despite these efforts, interactions between rainfall variability, catchment heterogeneity, and hydrological response remain poorly understood. This paper presents a review of our current understanding of hydrological processes in urban environments as reported in the literature, focusing on their spatial and temporal variability aspects. We review recent findings on the effects of rainfall variability on hydrological response and identify gaps where knowledge needs to be further developed to improve our understanding of and capability to predict urban hydrological response.

  19. Visual acuity and visual field impairment in Usher syndrome.

    Science.gov (United States)

    Edwards, A; Fishman, G A; Anderson, R J; Grover, S; Derlacki, D J

    1998-02-01

    To determine the extent of visual acuity and visual field impairment in patients with types 1 and 2 Usher syndrome. The records of 53 patients with type 1 and 120 patients with type 2 Usher syndrome were reviewed for visual acuity and visual field area at their most recent visit. Visual field areas were determined by planimetry of the II4e and V4e isopters obtained with a Goldmann perimeter. Both ordinary and logistic regression models were used to evaluate differences in visual acuity and visual field impairment between patients with type 1 and type 2 Usher syndrome. The difference in visual acuity of the better eye between patients with type 1 and type 2 varied by patient age (P=.01, based on a multiple regression model). The maximum difference in visual acuity between the 2 groups occurred during the third and fourth decades of life (with the type 1 patients being more impaired), while more similar acuities were seen in both younger and older patients. Fifty-one percent (n=27) of the type 1 patients had a visual acuity of 20/40 or better in at least 1 eye compared with 72% (n=87) of the type 2 patients (age-adjusted odds ratio, 3.9). Visual field area to both the II4e (P=.001) and V4e (Ptype 1 patients than type 2 patients. A concentric central visual field greater than 20 degrees in at least 1 eye was present in 20 (59%) of the available 34 visual fields of type 1 patients compared with 70 (67%) of the available 104 visual fields of type 2 patients (age-adjusted odds ratio, 2.9) with the V4e target and in 6 (21%) of the available 29 visual fields of type 1 patients compared with 36 (38%) of the available 94 visual fields of type 2 patients (age-adjusted odds ratio, 4.9) with the II4e target. The fraction of patients who had a visual acuity of 20/40 or better and a concentric central visual field greater than 20 degrees to the II4e target in at least 1 eye was 17% (n=5) in the type 1 patients and 35% (n=33) in the type 2 patients (age-adjusted odds ratio, 3

  20. EmailTime: visual analytics and statistics for temporal email

    Science.gov (United States)

    Erfani Joorabchi, Minoo; Yim, Ji-Dong; Shaw, Christopher D.

    2011-01-01

    Although the discovery and analysis of communication patterns in large and complex email datasets are difficult tasks, they can be a valuable source of information. We present EmailTime, a visual analysis tool for email correspondence patterns over the course of time that interactively portrays personal and interpersonal networks using the correspondence in the email dataset. Our approach is to treat time as the primary variable of interest and to plot emails along a timeline. EmailTime helps email dataset explorers interpret archived messages by providing zooming, panning, filtering, highlighting, etc. To support analysis, it also measures and visualizes histograms, graph centrality, and frequency on the communication graph that can be induced from the email collection. This paper describes EmailTime's capabilities, along with a large case study with the Enron email dataset exploring the behaviors of email users in different organizational positions from January 2000 to December 2001. We defined email behavior as a person's email activity level according to a series of measured metrics, e.g., numbers of sent and received emails, numbers of email addresses, etc. These metrics were calculated through EmailTime. Results showed specific patterns in the use of email within different organizational positions. We suggest that integrating both statistics and visualizations to display information about email datasets may simplify their evaluation.
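    A minimal sketch of the kind of per-person activity metrics and communication-graph centrality described above might look like the following; the column names and toy messages are assumptions rather than the Enron corpus schema, and networkx degree centrality stands in for whichever centrality measure the tool actually computes.

```python
# Hypothetical sketch: per-person sent/received counts and a degree centrality
# on the communication graph induced from an email table.
import pandas as pd
import networkx as nx

emails = pd.DataFrame({
    "sender":    ["alice", "bob", "alice", "carol"],
    "recipient": ["bob", "alice", "carol", "alice"],
    "date": pd.to_datetime(["2000-01-03", "2000-01-04", "2001-06-01", "2001-06-02"]),
})

# Activity-level metrics per person.
sent = emails.groupby("sender").size().rename("sent")
received = emails.groupby("recipient").size().rename("received")
activity = pd.concat([sent, received], axis=1).fillna(0)

# Communication graph induced from the messages, plus degree centrality.
graph = nx.from_pandas_edgelist(emails, "sender", "recipient",
                                create_using=nx.DiGraph)
activity["degree_centrality"] = pd.Series(nx.degree_centrality(graph))
print(activity)
```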

  1. Category-specific visual responses: an intracranial study comparing gamma, beta, alpha and ERP response selectivity

    Directory of Open Access Journals (Sweden)

    Juan R Vidal

    2010-11-01

    Full Text Available The specificity of neural responses to visual objects is a major topic in visual neuroscience. In humans, functional magnetic resonance imaging (fMRI) studies have identified several regions of the occipital and temporal lobe that appear specific to faces, letter-strings, scenes, or tools. Direct electrophysiological recordings in the visual cortical areas of epileptic patients have largely confirmed this modular organization, using either single-neuron peri-stimulus time-histogram or intracerebral event-related potentials (iERP). In parallel, a new research stream has emerged using high-frequency gamma-band activity (50-150 Hz; GBR) and low-frequency alpha/beta activity (8-24 Hz; ABR) to map functional networks in humans. An obvious question is now whether the functional organization of the visual cortex revealed by fMRI, ERP, GBR, and ABR coincide. We used direct intracerebral recordings in 18 epileptic patients to directly compare GBR, ABR, and ERP elicited by the presentation of seven major visual object categories (faces, scenes, houses, consonants, pseudowords, tools, and animals), in relation to previous fMRI studies. Remarkably both GBR and iERP showed strong category-specificity that was in many cases sufficient to infer stimulus object category from the neural response at single-trial level. However, we also found a strong discrepancy between the selectivity of GBR, ABR, and ERP with less than 10% of spatial overlap between sites eliciting the same category-specificity. Overall, we found that selective neural responses to visual objects were broadly distributed in the brain with a prominent spatial cluster located in the posterior temporal cortex. Moreover, the different neural markers (GBR, ABR, and iERP) that elicit selectivity towards specific visual object categories present little spatial overlap suggesting that the information content of each marker can uniquely characterize high-level visual information in the brain.

  2. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    International Nuclear Information System (INIS)

    Nasaruddin, N H; Yusoff, A N; Kaur, S

    2014-01-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus

  3. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    Science.gov (United States)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus.

  4. Temporal Coherence Strategies for Augmented Reality Labeling

    DEFF Research Database (Denmark)

    Madsen, Jacob Boesen; Tatzgern, Markus; Madsen, Claus B.

    2016-01-01

    Temporal coherence of annotations is an important factor in augmented reality user interfaces and for information visualization. In this paper, we empirically evaluate four different techniques for annotation. Based on these findings, we follow up with subjective evaluations in a second experiment...

  5. DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data

    Science.gov (United States)

    Husar, R. B.; Hoijarvi, K.

    2017-12-01

    DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from providers to users by enabling the creation of user-driven data processing/visualization applications. DataFed "wrapper" components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial and time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. A complete application is written in a custom-made data-flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants; seasonal, weekly, and diurnal cycles; and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support the EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe, and Asia.

  6. Gender differences in pre-attentive change detection for visual but not auditory stimuli.

    Science.gov (United States)

    Yang, Xiuxian; Yu, Yunmiao; Chen, Lu; Sun, Hailian; Qiao, Zhengxue; Qiu, Xiaohui; Zhang, Congpei; Wang, Lin; Zhu, Xiongzhao; He, Jincai; Zhao, Lun; Yang, Yanjie

    2016-01-01

    Despite ongoing debate about gender differences in pre-attention processes, little is known about gender effects on change detection for auditory and visual stimuli. We explored gender differences in change detection while processing duration information in auditory and visual modalities. We investigated pre-attentive processing of duration information using a deviant-standard reverse oddball paradigm (50 ms/150 ms) for auditory and visual mismatch negativity (aMMN and vMMN) in males and females (n=21/group). In the auditory modality, decrement and increment aMMN were observed at 150-250 ms after the stimulus onset, and there was no significant gender effect on MMN amplitudes in temporal or fronto-central areas. In contrast, in the visual modality, only increment vMMN was observed at 180-260 ms after the onset of stimulus, and it was higher in males than in females. No gender effect was found in change detection for auditory stimuli, but change detection was facilitated for visual stimuli in males. Gender effects should be considered in clinical studies of pre-attention for visual stimuli. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  7. Practical use of visual medial temporal lobe atrophy cut-off scores in Alzheimer's disease: Validation in a large memory clinic population

    International Nuclear Information System (INIS)

    Claus, Jules J.; Holl, Dana C.; Roorda, Jelmen J.; Staekenborg, Salka S.; Schuur, Jacqueline; Koster, Pieter; Tielkes, Caroline E.M.; Scheltens, Philip

    2017-01-01

    To provide age-specific medial temporal lobe atrophy (MTA) cut-off scores for routine clinical practice as marker for Alzheimer's disease (AD). Patients with AD (n = 832, mean age 81.8 years) were compared with patients with subjective cognitive impairment (n = 333, mean age 71.8 years) in a large single-centre memory clinic. Mean of right and left MTA scores was determined with visual rating (Scheltens scale) using CT (0, no atrophy to 4, severe atrophy). Relationships between age and MTA scores were analysed with regression analysis. For various MTA cut-off scores, decade-specific sensitivity and specificity and area under the curve (AUC) values, computed with receiver operator characteristic curves, were determined. MTA strongly increased with age in both groups to a similar degree. Optimal MTA cut-off values for the age ranges <65, 65-74, 75-84 and ≥85 were: ≥1.0, ≥1.5, ≥ 2.0 and ≥2.0. Corresponding values of sensitivity and specificity were 83.3% and 86.4%; 73.7% and 84.6%; 73.7% and 76.2%; and 84.0% and 62.5%. From this large unique memory clinic cohort we suggest decade-specific MTA cut-off scores for clinical use. After age 85 years, however, the practical usefulness of the MTA cut-off is limited. (orig.)
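    The decade-specific cut-offs above amount to choosing, within each age band, the MTA threshold that best trades off sensitivity and specificity on the ROC curve. A minimal sketch of that computation on synthetic scores (not the memory-clinic cohort data) is shown below.

```python
# Minimal sketch of deriving sensitivity/specificity for candidate MTA cut-offs
# within one age band, plus the AUC. The scores are synthetic examples,
# not the memory-clinic cohort data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# 1 = Alzheimer's disease, 0 = subjective cognitive impairment (synthetic).
diagnosis = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
mta_score = np.array([2.0, 2.5, 1.5, 3.0, 1.0, 0.5, 1.5, 1.0, 2.0, 2.0])

print("AUC:", roc_auc_score(diagnosis, mta_score))

fpr, tpr, thresholds = roc_curve(diagnosis, mta_score)
for thr, sens, spec in zip(thresholds, tpr, 1 - fpr):
    print(f"cut-off >= {thr:.1f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```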

  8. Project DyAdd: Visual Attention in Adult Dyslexia and ADHD

    Science.gov (United States)

    Laasonen, Marja; Salomaa, Jonna; Cousineau, Denis; Leppamaki, Sami; Tani, Pekka; Hokkanen, Laura; Dye, Matthew

    2012-01-01

    In this study of the project DyAdd, three aspects of visual attention were investigated in adults (18-55 years) with dyslexia (n = 35) or attention deficit/hyperactivity disorder (ADHD, n = 22), and in healthy controls (n = 35). Temporal characteristics of visual attention were assessed with Attentional Blink (AB), capacity of visual attention…

  9. a Web-Based Interactive Platform for Co-Clustering Spatio-Temporal Data

    Science.gov (United States)

    Wu, X.; Poorthuis, A.; Zurita-Milla, R.; Kraak, M.-J.

    2017-09-01

    Since current studies on clustering analysis mainly focus on exploring spatial or temporal patterns separately, a co-clustering algorithm is utilized in this study to enable the concurrent analysis of spatio-temporal patterns. To allow users to adopt and adapt the algorithm for their own analysis, it is integrated within the server side of an interactive web-based platform. The client side of the platform, running within any modern browser, is a graphical user interface (GUI) with multiple linked visualizations that facilitates the understanding, exploration and interpretation of the raw dataset and co-clustering results. Users can also upload their own datasets and adjust clustering parameters within the platform. To illustrate the use of this platform, an annual temperature dataset from 28 weather stations over 20 years in the Netherlands is used. After the dataset is loaded, it is visualized in a set of linked visualizations: a geographical map, a timeline and a heatmap. This aids the user in understanding the nature of their dataset and the appropriate selection of co-clustering parameters. Once the dataset is processed by the co-clustering algorithm, the results are visualized in the small multiples, a heatmap and a timeline to provide various views for better understanding and also further interpretation. Since the visualization and analysis are integrated in a seamless platform, the user can explore different sets of co-clustering parameters and instantly view the results in order to do iterative, exploratory data analysis. As such, this interactive web-based platform allows users to analyze spatio-temporal data using the co-clustering method and also helps the understanding of the results using multiple linked visualizations.
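    As a rough illustration of co-clustering a stations-by-years matrix like the temperature dataset described above, the sketch below uses scikit-learn's spectral co-clustering as a stand-in for the platform's server-side algorithm (which is not specified here), applied to random demo data rather than the 28-station Dutch dataset.

```python
# Sketch of co-clustering a stations-by-years temperature matrix. Spectral
# co-clustering from scikit-learn stands in for the platform's algorithm, and
# the matrix is random demo data.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
temps = rng.normal(loc=10.0, scale=2.0, size=(28, 20))  # 28 stations x 20 years

model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(temps)

# Row clusters group stations, column clusters group years; together they give
# the spatio-temporal co-clusters that the linked views would visualize.
print("station cluster labels:", model.row_labels_)
print("year cluster labels:   ", model.column_labels_)
```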

  10. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    Science.gov (United States)

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Visual and volumetric magnetic resonance imaging analyses of the hippocampal formations in a group of patients with a clinical diagnosis of temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    ROGACHESKI ÊNIO

    1998-01-01

    Full Text Available In order to compare the sensitivity of volumetric assessment with visual analysis in the magnetic resonance imaging (MRI) evaluation of the hippocampal formations of patients with refractory temporal lobe epilepsy who were candidates for temporal lobectomy, we studied 153 cases with a clinical diagnosis of temporal lobe epilepsy, using a 0.5 Tesla scanner and a T1-weighted inversion-recovery technique with 5 mm slices in the coronal plane. There was good agreement between the prospective visual analysis and a retrospective analysis performed by two independent observers (C=0.748 and 0.720), as well as between the retrospective analyses of the two observers (C=0.733). There was also genuine agreement (C=0.788) between the results of the quantitative and qualitative analyses performed prospectively. The quantitative analysis showed a non-significant tendency to lateralize more cases of clinically presumed hippocampal atrophy. Our results confirm the reliability of qualitative visual analysis and indicate the usefulness of hippocampal volumetry as a supplementary, objective and quantitative measure of hippocampal sclerosis.

  12. Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.

    Science.gov (United States)

    Auer, E T; Bernstein, L E; Coulter, D C

    1998-10-01

    Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.

  13. Large-Area Landslides Monitoring Using Advanced Multi-Temporal InSAR Technique over the Giant Panda Habitat, Sichuan, China

    Directory of Open Access Journals (Sweden)

    Panpan Tang

    2015-07-01

    Full Text Available The region near Dujiangyan City and Wenchuan County, Sichuan, China, including significant giant panda habitats, was severely impacted by the Wenchuan earthquake. Large-area landslides occurred and seriously threatened the lives of people and giant pandas. In this paper, we report the development of an enhanced multi-temporal interferometric synthetic aperture radar (MTInSAR) methodology to monitor potential post-seismic landslides by analyzing coherent scatterer (CS) and distributed scatterer (DS) points extracted from multi-temporal L-band ALOS/PALSAR data in an integrated manner. Through the integration of phase optimization and mitigation of orbit- and topography-related phase errors, surface deformations in the study area were derived: the rates in the line of sight (LOS) direction ranged from −7 to 1.5 cm/a. Dozens of potential landslides, distributed mainly along the Minjiang River, the Longmenshan Fault, and in other high-altitude areas, were detected. These findings matched the distribution of previous landslides. The InSAR-derived results demonstrated that some previous landslides were still active and that many unstable slopes have developed, with significant probabilities of future massive failures. The impact of landslides on the giant panda habitat, though ranging from low to moderate, will continue to be a concern for conservationists for some time in the future.

  14. Visual Temporal Logic as a Rapid Prototyping Tool

    DEFF Research Database (Denmark)

    Fränzle, Martin; Lüth, Karsten

    2001-01-01

    Within this survey article, we explain real-time symbolic timing diagrams and the ICOS tool-box supporting timing-diagram-based requirements capture and rapid prototyping. Real-time symbolic timing diagrams are a full-fledged metric-time temporal logic, but with a graphical syntax reminiscent...... of the informal timing diagrams widely used in electrical engineering. ICOS integrates a variety of tools, ranging from graphical specification editors over tautology checking and counterexample generation to code generators emitting C or VHDL, thus bridging the gap from formal specification to rapid prototype...

  15. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Pantic, Maja; Poel, Mannes; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  16. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    Science.gov (United States)

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  18. Task-specific impairments and enhancements induced by magnetic stimulation of human visual area V5.

    OpenAIRE

    Walsh, V; Ellison, A; Battelli, L; Cowey, A

    1998-01-01

    Transcranial magnetic stimulation (TMS) can be used to simulate the effects of highly circumscribed brain damage permanently present in some neuropsychological patients, by reversibly disrupting the normal functioning of the cortical area to which it is applied. By using TMS we attempted to recreate deficits similar to those reported in a motion-blind patient and to assess the specificity of deficits when TMS is applied over human area V5. We used six visual search tasks and showed that subje...

  19. Patterns of resting state connectivity in human primary visual cortical areas: a 7T fMRI study.

    Science.gov (United States)

    Raemaekers, Mathijs; Schellekens, Wouter; van Wezel, Richard J A; Petridou, Natalia; Kristo, Gert; Ramsey, Nick F

    2014-01-01

    The nature and origin of fMRI resting state fluctuations and connectivity are still not fully known. More detailed knowledge on the relationship between resting state patterns and brain function may help to elucidate this matter. We therefore performed an in-depth study of how resting state fluctuations map to the well-known architecture of the visual system. We investigated resting state connectivity at both a fine and large scale within and across visual areas V1, V2 and V3 in ten human subjects using a 7 Tesla scanner. We found evidence for several coexisting and overlapping connectivity structures at different spatial scales. At the fine-scale level we found enhanced connectivity between the same topographic locations in the fieldmaps of V1, V2 and V3, enhanced connectivity to the contralateral functional homologue, and to a lesser extent enhanced connectivity between iso-eccentric locations within the same visual area. However, by far the largest proportion of the resting state fluctuations occurred within large-scale bilateral networks. These large-scale networks mapped to some extent onto the architecture of the visual system and could thereby obscure fine-scale connectivity. In fact, most of the fine-scale connectivity only became apparent after the large-scale network fluctuations were filtered from the timeseries. We conclude that fMRI resting state fluctuations in the visual cortex may in fact be a composite signal of different overlapping sources. Isolating the different sources could enhance correlations between BOLD and electrophysiological correlates of resting state activity. © 2013 Elsevier Inc. All rights reserved.

  20. Temporal Order Processing in Adult Dyslexics.

    Science.gov (United States)

    Maxwell, David L.; And Others

    This study investigated the premise that disordered temporal order perception in retarded readers can be seen in the serial processing of both nonverbal auditory and visual information, and examined whether such information processing deficits relate to level of reading ability. The adult subjects included 20 in the dyslexic group, 12 in the…

  1. The dynamics of visual experience, an EEG study of subjective pattern formation.

    Directory of Open Access Journals (Sweden)

    Mark A Elliott

    Full Text Available BACKGROUND: Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. METHODOLOGY/PRINCIPAL FINDINGS: Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). CONCLUSIONS/SIGNIFICANCE: Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation.
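
    For orientation only, the following sketch shows a generic ICA decomposition of multichannel EEG and a per-component amplitude-variance measure of the kind referred to above; the channel count, component count and data are placeholders and do not reproduce the study's actual pipeline.

    ```python
    # Generic ICA decomposition of multichannel EEG (placeholder data).
    import numpy as np
    from sklearn.decomposition import FastICA

    eeg = np.random.randn(5000, 64)            # samples x channels (placeholder)
    ica = FastICA(n_components=20, random_state=0)
    sources = ica.fit_transform(eeg)           # samples x independent components
    amplitude_variance = sources.var(axis=0)   # one variance value per component
    ```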

  2. Temporal Synchrony Detection and Associations with Language in Young Children with ASD

    Directory of Open Access Journals (Sweden)

    Elena Patten

    2014-01-01

    Full Text Available Temporally synchronous audio-visual stimuli serve to recruit attention and enhance learning, including language learning in infants. Although few studies have examined this effect on children with autism, it appears that the ability to detect temporal synchrony between auditory and visual stimuli may be impaired, particularly given social-linguistic stimuli delivered via oral movement and spoken language pairings. However, children with autism can detect audio-visual synchrony given nonsocial stimuli (objects dropping and their corresponding sounds). We tested whether preschool children with autism could detect audio-visual synchrony given video recordings of linguistic stimuli paired with movement of related toys in the absence of faces. As a group, children with autism demonstrated the ability to detect audio-visual synchrony. Further, the amount of time they attended to the synchronous condition was positively correlated with receptive language. Findings suggest that object manipulations may enhance multisensory processing in linguistic contexts. Moreover, associations between synchrony detection and language development suggest that better processing of multisensory stimuli may guide and direct attention to communicative events thus enhancing linguistic development.

  3. Cortical networks involved in visual awareness independent of visual attention

    OpenAIRE

    Webb, Taylor W.; Igelström, Kajsa M.; Schurger, Aaron; Graziano, Michael S. A.

    2016-01-01

    Do specific areas of the brain participate in subjective visual experience? We measured brain activity in humans using fMRI. Participants were aware of a visual stimulus in one condition and unaware of it in another condition. The two conditions were balanced for their effect on visual attention. Specific brain areas were more active in the aware than in the unaware condition, suggesting they were involved in subjective awareness independent of attention. The largest cluster of activity was f...

  4. Temporal Correlations and Neural Spike Train Entropy

    International Nuclear Information System (INIS)

    Schultz, Simon R.; Panzeri, Stefano

    2001-01-01

    Sampling considerations limit the experimental conditions under which information-theoretic analyses of neurophysiological data yield reliable results. We develop a procedure for computing the full temporal entropy and information of ensembles of neural spike trains, which performs reliably for limited samples of data. This approach also yields insight into the role of correlations between spikes in temporal coding mechanisms. The method, when applied to recordings from complex cells of the monkey primary visual cortex, results in lower rms error information estimates in comparison to a 'brute force' approach.
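
    A minimal sketch of the naive ("brute force") plug-in estimate of spike-train word entropy, the quantity that the sampling-corrected procedure described above is designed to estimate more reliably from limited data; the word length and binary binning below are illustrative assumptions, not the authors' settings.

    ```python
    # Plug-in estimate of spike-train word entropy from binary words.
    import numpy as np
    from collections import Counter

    def word_entropy(spike_trains: np.ndarray, word_len: int = 8) -> float:
        """Entropy (bits) of non-overlapping binary words of length word_len.
        spike_trains: array of shape (n_trials, n_bins) with 0/1 entries."""
        words = Counter()
        for trial in spike_trains:
            for start in range(0, trial.size - word_len + 1, word_len):
                words[tuple(trial[start:start + word_len])] += 1
        p = np.array(list(words.values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())
    ```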

  5. Saccade-induced image motion cannot account for post-saccadic enhancement of visual processing in primate MST

    Directory of Open Access Journals (Sweden)

    Shaun L Cloherty

    2015-09-01

    Full Text Available Primates use saccadic eye movements to make gaze changes. In many visual areas, including the dorsal medial superior temporal area (MSTd) of macaques, neural responses to visual stimuli are reduced during saccades but enhanced afterwards. How does this enhancement arise – from an internal mechanism associated with saccade generation or through visual mechanisms activated by the saccade sweeping the image of the visual scene across the retina? Spontaneous activity in MSTd is elevated even after saccades made in darkness, suggesting a central mechanism for post-saccadic enhancement. However, based on the timing of this effect, it may arise from a different mechanism than occurs in normal vision. Like neural responses in MSTd, initial ocular following eye speed is enhanced after saccades, with evidence suggesting both internal and visually mediated mechanisms. Here we recorded from visual neurons in MSTd and measured responses to motion stimuli presented soon after saccades and soon after simulated saccades – saccade-like displacements of the background image during fixation. We found that neural responses in MSTd were enhanced when preceded by real saccades but not when preceded by simulated saccades. Furthermore, we also observed enhancement following real saccades made across a blank screen that generated no motion signal within the recorded neurons’ receptive fields. We conclude that in MSTd the mechanism leading to post-saccadic enhancement has internal origins.

  6. Efficient visual object and word recognition relies on high spatial frequency coding in the left posterior fusiform gyrus: evidence from a case-series of patients with ventral occipito-temporal cortex damage.

    Science.gov (United States)

    Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A

    2013-11-01

    Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.

  7. Multiple Temporalities, Layered Histories

    Directory of Open Access Journals (Sweden)

    Steven Pearson

    2017-11-01

    Full Text Available In Quotational Practices: Repeating the Future in Contemporary Art, Patrick Greaney asserts, “the past matters not only because of what actually happened but also because of the possibilities that were not realized and that still could be. Quotation evokes those possibilities. By repeating the past, artists and writers may be attempting to repeat that past’s unrealized futures.”[1]  In the information age, the Internet, for instance, provides us an expanded collection of visual information—quite literally available at our fingertips—summoning together aspects of the past and possibilities of the future into a boundless present. Sketchbook Revisions (2014–2015), a series of mixed-media paintings, represents my attempt to communicate the ways in which I experience my contemporary moment constructed from multiple temporalities excavated from my past. This body of work combines fragments of representational paintings created between 1995 and 2003 and nonrepresentational renderings produced between 2003 and 2014. Using traditional tracing paper and graphic color, I randomly select moments of my previous work to transfer and layer over selected areas of already-filled pages of a sketchbook I used from 2003 to 2004. These sketches depict objects I encountered in studio art classrooms and iconic architecture on the campus of McDaniel College, and often incorporate teaching notes. The final renditions of fragmented and layered histories enact the ways that we collectively experience multiple temporalities in the present. Quoting my various bodies of work, Sketchbook Revisions challenges both material and conceptual boundaries that determine fixed notions of artistic identity.

  8. Effects of Spatio-Temporal Aliasing on Pilot Performance in Active Control Tasks

    Science.gov (United States)

    Zaal, Peter; Sweet, Barbara

    2010-01-01

    Spatio-temporal aliasing affects pilot performance and control behavior. For increasing refresh rates: 1) Significant change in control behavior: a) Increase in visual gain and neuromuscular frequency. b) Decrease in visual time delay. 2) Increase in tracking performance: a) Decrease in RMSe. b) Increase in crossover frequency.

  9. A dual-route perspective on brain activation in response to visual words: evidence for a length by lexicality interaction in the visual word form area (VWFA).

    Science.gov (United States)

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-02-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., "Does xxx sound like an existing word?") presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  10. Noninvasive studies of human visual cortex using neuromagnetic techniques

    International Nuclear Information System (INIS)

    Aine, C.J.; George, J.S.; Supek, S.; Maclin, E.L.

    1990-01-01

    The major goals of noninvasive studies of the human visual cortex are: to increase knowledge of the functional organization of cortical visual pathways; and to develop noninvasive clinical tests for the assessment of cortical function. Noninvasive techniques suitable for studies of the structure and function of human visual cortex include magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission tomography (SPECT), scalp recorded event-related potentials (ERPs), and event-related magnetic fields (ERFs). The primary challenge faced by noninvasive functional measures is to optimize the spatial and temporal resolution of the measurement and analytic techniques in order to effectively characterize the spatial and temporal variations in patterns of neuronal activity. In this paper we review the use of neuromagnetic techniques for this purpose. 8 refs., 3 figs

  11. Spatio-Temporal Analysis of Urban Crime Pattern and its Implication for Abuja Municipal Area Council, Nigeria

    OpenAIRE

    Taiye Oluwafemi Adewuyi; Patrick Ali Eneji; Anthonia Silas Baduku; Emmanuel Ajayi Olofin

    2017-01-01

    This study presented a spatio-temporal analysis of the urban crime pattern and its implications for the Abuja Municipal Area Council of the Federal Capital Territory of Nigeria, with the aim of using a Geographical Information System to improve the criminal justice system. The aim was achieved by establishing crime incident spots, the types of crime committed, the times at which they occurred, and the factors responsible for the prevailing crime. The methods for data collection involved Geoinformatics through the use of remote s...

  12. Temporal resolution for the perception of features and conjunctions.

    Science.gov (United States)

    Bodelón, Clara; Fallah, Mazyar; Reynolds, John H

    2007-01-24

    The visual system decomposes stimuli into their constituent features, represented by neurons with different feature selectivities. How the signals carried by these feature-selective neurons are integrated into coherent object representations is unknown. To constrain the set of possible integrative mechanisms, we quantified the temporal resolution of perception for color, orientation, and conjunctions of these two features. We find that temporal resolution is measurably higher for each feature than for their conjunction, indicating that time is required to integrate features into a perceptual whole. This finding places temporal limits on the mechanisms that could mediate this form of perceptual integration.

  13. Temporal Processing Development in Chinese Primary School-Aged Children with Dyslexia

    Science.gov (United States)

    Wang, Li-Chih; Yang, Hsien-Ming

    2018-01-01

    This study aimed to investigate the development of visual and auditory temporal processing among children with and without dyslexia and to examine the roles of temporal processing in reading and reading-related abilities. A total of 362 Chinese children in Grades 1-6 were recruited from Taiwan. Half of the children had dyslexia, and the other half…

  14. TEMPORAL AND SPATIAL ANALYSIS OF EXTREME RAINFALL ON THE SLOPE AREA OF MT. MERAPI

    Directory of Open Access Journals (Sweden)

    Dhian Dharma Prayuda

    2015-02-01

    Full Text Available Rainfall has temporal and spatial characteristics with certain patterns that are affected by the topographic and climatological variations of an area. The intensity of extreme rainfall is one of the important characteristics related to the trigger factors for debris flow. This paper discusses the results of an analysis of short-duration rainfall data from the southern and western slopes of Mt. Merapi. Hourly rainfall measured at 14 rainfall stations over the last 27 years was used as input to the analysis. The rainfall intensity-duration-frequency (IDF) relationship was derived using the empirical formulas of the Sherman, Kimijima, Haspers, and Mononobe methods. The characteristics of extreme rainfall intensity were analysed by spatial interpolation using the Inverse Distance Weighted (IDW) method. The results show that the IDF of rainfall in the research area fits Sherman's formula best. In addition, the spatial distribution pattern of maximum rainfall intensity was assessed on the basis of areal rainfall, and the difference between spatial maps of one-hour extreme rainfall derived with the isolated-event and non-isolated-event methods was evaluated. The results of this preliminary research are expected to serve as inputs for the establishment of debris-flow early warning on the slopes of Mt. Merapi.
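
    As a rough illustration of the IDF fitting step, the sketch below fits a Sherman-type curve I = a / t^n to intensity-duration pairs by linear regression in log-log space. Parameterizations of the Sherman formula vary between texts and the data values are invented, so this is a sketch under stated assumptions rather than the study's procedure.

    ```python
    # Fit a Sherman-type IDF curve I = a / t**n (assumed form) in log-log space.
    import numpy as np

    def fit_sherman(duration_hr: np.ndarray, intensity_mm_hr: np.ndarray):
        """Return (a, n) such that I ~= a / t**n."""
        slope, intercept = np.polyfit(np.log(duration_hr), np.log(intensity_mm_hr), 1)
        return float(np.exp(intercept)), float(-slope)

    # Example with made-up values for a single return period:
    t = np.array([1.0, 2.0, 3.0, 6.0, 12.0])       # hours
    i = np.array([60.0, 38.0, 29.0, 18.0, 11.0])    # mm/h
    a, n = fit_sherman(t, i)
    predicted = a / t**n                             # fitted IDF curve
    ```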

  15. Modeling Geometric-Temporal Context With Directional Pyramid Co-Occurrence for Action Recognition.

    Science.gov (United States)

    Yuan, Chunfeng; Li, Xi; Hu, Weiming; Ling, Haibin; Maybank, Stephen J

    2014-02-01

    In this paper, we present a new geometric-temporal representation for visual action recognition based on local spatio-temporal features. First, we propose a modified covariance descriptor under the log-Euclidean Riemannian metric to represent the spatio-temporal cuboids detected in the video sequences. Compared with previously proposed covariance descriptors, our descriptor can be measured and clustered in Euclidean space. Second, to capture the geometric-temporal contextual information, we construct a directional pyramid co-occurrence matrix (DPCM) to describe the spatio-temporal distribution of the vector-quantized local feature descriptors extracted from a video. DPCM characterizes the co-occurrence statistics of local features as well as the spatio-temporal positional relationships among the concurrent features. These statistics provide strong descriptive power for action recognition. To use DPCM for action recognition, we propose a directional pyramid co-occurrence matching kernel to measure the similarity of videos. The proposed method achieves state-of-the-art performance and improves on the recognition performance of the bag-of-visual-words (BOVW) models by a large margin on six public data sets. For example, on the KTH data set, it achieves 98.78% accuracy while the BOVW approach only achieves 88.06%. On both the Weizmann and UCF CIL data sets, the highest possible accuracy of 100% is achieved.
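
    The sketch below illustrates the general log-Euclidean covariance idea the abstract refers to: a covariance matrix computed over low-level features from one spatio-temporal cuboid is mapped through the matrix logarithm so that plain Euclidean distances (and hence standard clustering) apply. The feature layout and regularization constant are placeholders, not the paper's exact descriptor.

    ```python
    # Log-Euclidean covariance descriptor sketch for one spatio-temporal cuboid.
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_descriptor(features: np.ndarray) -> np.ndarray:
        """features: (n_samples, d) low-level features sampled from a cuboid.
        Returns a flattened log-covariance vector living in Euclidean space."""
        cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
        log_cov = logm(cov).real                  # matrix logarithm of the SPD matrix
        iu = np.triu_indices_from(log_cov)
        return log_cov[iu]                        # symmetric, so upper triangle suffices

    # Two cuboids can now be compared with plain Euclidean distance, e.g.
    # d = np.linalg.norm(log_euclidean_descriptor(f1) - log_euclidean_descriptor(f2))
    ```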

  16. Event Based Simulator for Parallel Computing over the Wide Area Network for Real Time Visualization

    Science.gov (United States)

    Sundararajan, Elankovan; Harwood, Aaron; Kotagiri, Ramamohanarao; Satria Prabuwono, Anton

    As the computational requirements of applications in computational science continue to grow tremendously, the use of computational resources distributed across the Wide Area Network (WAN) becomes advantageous. However, not all applications can be executed over the WAN due to communication overhead that can drastically slow down the computation. In this paper, we introduce an event-based simulator to investigate the performance of parallel algorithms executed over the WAN. The event-based simulator, known as SIMPAR (SIMulator for PARallel computation), simulates the actual computations and communications involved in parallel computation over the WAN using time stamps. Visualization of real-time applications requires a steady stream of processed data. Hence, SIMPAR may prove to be a valuable tool for investigating the types of applications and the computing resources required to provide an uninterrupted flow of processed data for real-time visualization. The results obtained from the simulation show concurrence with the performance expected from the L-BSP model.
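
    A toy discrete-event loop in the spirit of the time-stamped simulation described above: per-worker compute completions and WAN message deliveries are events ordered by timestamp. SIMPAR's actual internals are not described in this abstract, so all names, delays and the single-step makespan measure are illustrative assumptions.

    ```python
    # Minimal discrete-event sketch: compute steps followed by WAN transfers.
    import heapq

    def simulate(n_workers: int = 4, compute_s: float = 1.0, wan_latency_s: float = 0.2) -> float:
        events = []  # priority queue of (time, kind, worker)
        for w in range(n_workers):
            heapq.heappush(events, (compute_s, "compute_done", w))
        finished_at = 0.0
        while events:
            t, kind, w = heapq.heappop(events)
            if kind == "compute_done":
                # the result must cross the WAN before it can be visualized
                heapq.heappush(events, (t + wan_latency_s, "result_received", w))
            else:
                finished_at = max(finished_at, t)
        return finished_at  # makespan of one parallel step

    print(simulate())
    ```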

  17. Task activation and functional connectivity show concordant memory laterality in temporal lobe epilepsy.

    Science.gov (United States)

    Sideman, Noah; Chaitanya, Ganne; He, Xiaosong; Doucet, Gaelle; Kim, Na Young; Sperling, Michael R; Sharan, Ashwini D; Tracy, Joseph I

    2018-04-01

    In epilepsy, asymmetries in the organization of mesial temporal lobe (MTL) functions help determine the cognitive risk associated with procedures such as anterior temporal lobectomy. Past studies have investigated the change/shift in a visual episodic memory laterality index (LI) in mesial temporal lobe structures through functional magnetic resonance imaging (fMRI) task activations. Here, we examine whether underlying task-related functional connectivity (FC) is concordant with such standard fMRI laterality measures. A total of 56 patients with temporal lobe epilepsy (TLE) (Left TLE [LTLE]: 31; Right TLE [RTLE]: 25) and 34 matched healthy controls (HC) underwent fMRI scanning during performance of a scene encoding task (SET). We assessed an activation-based LI of the hippocampal gyrus (HG) and parahippocampal gyrus (PHG) during the SET and its correspondence with task-related FC measures. Analyses involving the HG and PHG showed that the patients with LTLE had a consistently higher LI (right-lateralized) than that of the HC and group with RTLE, indicating functional reorganization. The patients with RTLE did not display a reliable contralateral shift away from the pathology, with the mesial structures showing quite distinct laterality patterns (HG, no laterality bias; PHG, no evidence of LI shift). The FC data for the group with LTLE provided confirmation of reorganization effects, revealing that a rightward task LI may be based on underlying connections between several left-sided regions (middle/superior occipital and left medial frontal gyri) and the right PHG. The FCs between the right HG and left anterior cingulate/medial frontal gyri were also observed in LTLE. Importantly, the data demonstrate that the areas involved in the LTLE task activation shift to the right hemisphere showed a corresponding increase in task-related FCs between the hemispheres. Altered laterality patterns based on mesial temporal lobe epilepsy (MTLE) pathology manifest as several
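
    For reference, the conventional activation-based laterality index is LI = (L − R) / (L + R), computed from suprathreshold voxels in left and right regions of interest. Sign conventions differ between studies (in the sketch below positive means left-lateralized, whereas the abstract appears to treat higher values as right-lateralized), and the threshold is an illustrative assumption rather than the study's definition.

    ```python
    # Conventional fMRI laterality index from suprathreshold voxel counts.
    import numpy as np

    def laterality_index(left_tmap: np.ndarray, right_tmap: np.ndarray,
                         threshold: float = 2.5) -> float:
        """Positive values indicate left-lateralized activation (sign convention assumed)."""
        n_left = np.count_nonzero(left_tmap > threshold)
        n_right = np.count_nonzero(right_tmap > threshold)
        if n_left + n_right == 0:
            return 0.0
        return (n_left - n_right) / (n_left + n_right)
    ```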

  18. Visual Working Memory Is Independent of the Cortical Spacing Between Memoranda.

    Science.gov (United States)

    Harrison, William J; Bays, Paul M

    2018-03-21

    The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. SIGNIFICANCE STATEMENT Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short-term memory remains contentious. A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural

  19. Magnifying visual target information and the role of eye movements in motor sequence learning.

    Science.gov (United States)

    Massing, Matthias; Blandin, Yannick; Panzer, Stefan

    2016-01-01

    An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing in eye movements (free to use their eyes/instructed to fixate) and the visual display (small/magnified). All participants had to perform a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes with the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrated that a spatial-temporal pattern can be learned without eye movements, but being permitted to use eye movements facilitates response production when the visual angle is increased. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Novel mathematical neural models for visual attention

    DEFF Research Database (Denmark)

    Li, Kang

    for the visual attention theories and spiking neuron models for single spike trains. Statistical inference and model selection are performed and various numerical methods are explored. The designed methods also give a framework for neural coding under visual attention theories. We conduct both analysis on real......Visual attention has been extensively studied in psychology, but some fundamental questions remain controversial. We focus on two questions in this study. First, we investigate how a neuron in visual cortex responds to multiple stimuli inside the receptive field, described by either a response...... system, supported by simulation study. Finally, we present the decoding of multiple temporal stimuli under these visual attention theories, also in a realistic biophysical situation with simulations....